Test Report: KVM_Linux_containerd 20316

afc1769d7af9cf0fbffe1101eacbcd6e5c84f215:2025-01-27:38084

Failed tests (3/316)

Order  Failed test  Duration (s)
358 TestStartStop/group/no-preload/serial/SecondStart 1588.39
361 TestStartStop/group/embed-certs/serial/SecondStart 1613.13
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 1614.96
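
All three failures are TestStartStop SecondStart cases; the no-preload run below was killed (signal: killed) after about 26 minutes, and the other two profiles show comparable durations. As a rough local-reproduction sketch (assuming a workspace where the integration binaries have already been built under out/, which this report does not describe), the start command for the first failure, copied from its log below, can be re-run directly:

	out/minikube-linux-amd64 start -p no-preload-887091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1

Note that --preload=false disables the preloaded image tarball, so this profile caches each Kubernetes image individually (visible in the cache.go lines of the log), and --container-runtime=containerd selects the runtime under test.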
TestStartStop/group/no-preload/serial/SecondStart (1588.39s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-887091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 02:57:21.896318 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-887091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m25.952034554s)

-- stdout --
	* [no-preload-887091] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-887091" primary control-plane node in "no-preload-887091" cluster
	* Restarting existing kvm2 VM for "no-preload-887091" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-887091 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

-- /stdout --
** stderr ** 
	I0127 02:57:17.826407 1119007 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:57:17.826674 1119007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:57:17.826684 1119007 out.go:358] Setting ErrFile to fd 2...
	I0127 02:57:17.826688 1119007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:57:17.826883 1119007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 02:57:17.827437 1119007 out.go:352] Setting JSON to false
	I0127 02:57:17.828461 1119007 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13185,"bootTime":1737933453,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:57:17.828579 1119007 start.go:139] virtualization: kvm guest
	I0127 02:57:17.830766 1119007 out.go:177] * [no-preload-887091] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:57:17.832244 1119007 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:57:17.832251 1119007 notify.go:220] Checking for updates...
	I0127 02:57:17.834592 1119007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:57:17.835787 1119007 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 02:57:17.836899 1119007 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	I0127 02:57:17.838103 1119007 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:57:17.839250 1119007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:57:17.840874 1119007 config.go:182] Loaded profile config "no-preload-887091": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:57:17.841323 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:57:17.841397 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:57:17.856780 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46079
	I0127 02:57:17.857232 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:57:17.857742 1119007 main.go:141] libmachine: Using API Version  1
	I0127 02:57:17.857764 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:57:17.858054 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:57:17.858248 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 02:57:17.858523 1119007 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:57:17.858848 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:57:17.858902 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:57:17.873721 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41431
	I0127 02:57:17.874168 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:57:17.874629 1119007 main.go:141] libmachine: Using API Version  1
	I0127 02:57:17.874660 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:57:17.874957 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:57:17.875141 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 02:57:17.911317 1119007 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 02:57:17.912538 1119007 start.go:297] selected driver: kvm2
	I0127 02:57:17.912554 1119007 start.go:901] validating driver "kvm2" against &{Name:no-preload-887091 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-887091 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:57:17.912724 1119007 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:57:17.913732 1119007 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:17.913823 1119007 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-1057178/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 02:57:17.929134 1119007 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 02:57:17.929668 1119007 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 02:57:17.929707 1119007 cni.go:84] Creating CNI manager for ""
	I0127 02:57:17.929753 1119007 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 02:57:17.929790 1119007 start.go:340] cluster config:
	{Name:no-preload-887091 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-887091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-hos
t Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:57:17.929898 1119007 iso.go:125] acquiring lock: {Name:mkd30bc9d11f9170e89ad95ce7ba25fa6d1e04f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:17.932066 1119007 out.go:177] * Starting "no-preload-887091" primary control-plane node in "no-preload-887091" cluster
	I0127 02:57:17.933218 1119007 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:57:17.933354 1119007 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/config.json ...
	I0127 02:57:17.933496 1119007 cache.go:107] acquiring lock: {Name:mkaf3b489bfd6dc421a2fa86abe9d65b6bff11ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:17.933498 1119007 cache.go:107] acquiring lock: {Name:mkf36fb3c7936dc43a7accf4d09084c009e59a41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:17.933551 1119007 cache.go:107] acquiring lock: {Name:mkf165b974752458ff0611cfb9775fd80f2c97e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:17.933531 1119007 cache.go:107] acquiring lock: {Name:mkc9cd8f58fe1b37748c7212f0269bf025f162f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:17.933600 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 02:57:17.933612 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 02:57:17.933614 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 02:57:17.933500 1119007 cache.go:107] acquiring lock: {Name:mk60aac71096a73a7daed4ed978fcb744e76477d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:17.933624 1119007 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 144.173µs
	I0127 02:57:17.933633 1119007 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 83.685µs
	I0127 02:57:17.933642 1119007 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 02:57:17.933644 1119007 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 02:57:17.933610 1119007 start.go:360] acquireMachinesLock for no-preload-887091: {Name:mka8dc154c517d64837d06e2f84f8bddd0b82c58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 02:57:17.933665 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 02:57:17.933671 1119007 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 186.712µs
	I0127 02:57:17.933676 1119007 start.go:364] duration metric: took 17.398µs to acquireMachinesLock for "no-preload-887091"
	I0127 02:57:17.933680 1119007 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 02:57:17.933620 1119007 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.603µs
	I0127 02:57:17.933689 1119007 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 02:57:17.933693 1119007 start.go:96] Skipping create...Using existing machine configuration
	I0127 02:57:17.933689 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 02:57:17.933700 1119007 fix.go:54] fixHost starting: 
	I0127 02:57:17.933670 1119007 cache.go:107] acquiring lock: {Name:mk67516821ece3ab5011ba3de57f5e4304385ce1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:17.933705 1119007 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 209.742µs
	I0127 02:57:17.933723 1119007 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 02:57:17.933729 1119007 cache.go:107] acquiring lock: {Name:mk8c8166121360e55636f1daf7b49e8ae0fd0b6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:17.933775 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 02:57:17.933747 1119007 cache.go:107] acquiring lock: {Name:mk99ee89a947dcdbf6fe1f2b02e866da7649a3da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:17.933785 1119007 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 168.665µs
	I0127 02:57:17.933800 1119007 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 02:57:17.933879 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 02:57:17.933898 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 02:57:17.933897 1119007 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 221.731µs
	I0127 02:57:17.933910 1119007 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 209.265µs
	I0127 02:57:17.933918 1119007 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 02:57:17.933920 1119007 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 02:57:17.933928 1119007 cache.go:87] Successfully saved all images to host disk.
	I0127 02:57:17.934028 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:57:17.934063 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:57:17.949426 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37949
	I0127 02:57:17.949868 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:57:17.950384 1119007 main.go:141] libmachine: Using API Version  1
	I0127 02:57:17.950420 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:57:17.950776 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:57:17.951024 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 02:57:17.951257 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 02:57:17.953016 1119007 fix.go:112] recreateIfNeeded on no-preload-887091: state=Stopped err=<nil>
	I0127 02:57:17.953039 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	W0127 02:57:17.953189 1119007 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 02:57:17.954893 1119007 out.go:177] * Restarting existing kvm2 VM for "no-preload-887091" ...
	I0127 02:57:17.956110 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Start
	I0127 02:57:17.956288 1119007 main.go:141] libmachine: (no-preload-887091) starting domain...
	I0127 02:57:17.956312 1119007 main.go:141] libmachine: (no-preload-887091) ensuring networks are active...
	I0127 02:57:17.956990 1119007 main.go:141] libmachine: (no-preload-887091) Ensuring network default is active
	I0127 02:57:17.957364 1119007 main.go:141] libmachine: (no-preload-887091) Ensuring network mk-no-preload-887091 is active
	I0127 02:57:17.957797 1119007 main.go:141] libmachine: (no-preload-887091) getting domain XML...
	I0127 02:57:17.958664 1119007 main.go:141] libmachine: (no-preload-887091) creating domain...
	I0127 02:57:19.173205 1119007 main.go:141] libmachine: (no-preload-887091) waiting for IP...
	I0127 02:57:19.174169 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:19.174658 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:19.174730 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:19.174644 1119042 retry.go:31] will retry after 202.79074ms: waiting for domain to come up
	I0127 02:57:19.379134 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:19.379647 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:19.379677 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:19.379625 1119042 retry.go:31] will retry after 302.512758ms: waiting for domain to come up
	I0127 02:57:19.684226 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:19.684853 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:19.684883 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:19.684803 1119042 retry.go:31] will retry after 351.89473ms: waiting for domain to come up
	I0127 02:57:20.038122 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:20.038605 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:20.038673 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:20.038579 1119042 retry.go:31] will retry after 476.247327ms: waiting for domain to come up
	I0127 02:57:20.516437 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:20.517032 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:20.517067 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:20.516999 1119042 retry.go:31] will retry after 736.862022ms: waiting for domain to come up
	I0127 02:57:21.256068 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:21.256666 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:21.256691 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:21.256633 1119042 retry.go:31] will retry after 716.788959ms: waiting for domain to come up
	I0127 02:57:21.975003 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:21.975580 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:21.975612 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:21.975554 1119042 retry.go:31] will retry after 798.105294ms: waiting for domain to come up
	I0127 02:57:22.774811 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:22.775311 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:22.775337 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:22.775283 1119042 retry.go:31] will retry after 1.275835327s: waiting for domain to come up
	I0127 02:57:24.052218 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:24.052768 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:24.052804 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:24.052759 1119042 retry.go:31] will retry after 1.463923822s: waiting for domain to come up
	I0127 02:57:25.518368 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:25.518950 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:25.518982 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:25.518889 1119042 retry.go:31] will retry after 1.710831863s: waiting for domain to come up
	I0127 02:57:27.231833 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:27.232414 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:27.232450 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:27.232365 1119042 retry.go:31] will retry after 2.473402712s: waiting for domain to come up
	I0127 02:57:29.707356 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:29.708097 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:29.708163 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:29.708083 1119042 retry.go:31] will retry after 2.914089375s: waiting for domain to come up
	I0127 02:57:32.623312 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:32.623781 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
	I0127 02:57:32.623811 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:32.623748 1119042 retry.go:31] will retry after 4.217598377s: waiting for domain to come up
	I0127 02:57:36.845771 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:36.846275 1119007 main.go:141] libmachine: (no-preload-887091) found domain IP: 192.168.61.201
	I0127 02:57:36.846317 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has current primary IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:36.846338 1119007 main.go:141] libmachine: (no-preload-887091) reserving static IP address...
	I0127 02:57:36.846916 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "no-preload-887091", mac: "52:54:00:32:f8:ff", ip: "192.168.61.201"} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:36.846956 1119007 main.go:141] libmachine: (no-preload-887091) DBG | skip adding static IP to network mk-no-preload-887091 - found existing host DHCP lease matching {name: "no-preload-887091", mac: "52:54:00:32:f8:ff", ip: "192.168.61.201"}
	I0127 02:57:36.846976 1119007 main.go:141] libmachine: (no-preload-887091) reserved static IP address 192.168.61.201 for domain no-preload-887091
	I0127 02:57:36.846996 1119007 main.go:141] libmachine: (no-preload-887091) waiting for SSH...
	I0127 02:57:36.847014 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Getting to WaitForSSH function...
	I0127 02:57:36.849363 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:36.849731 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:36.849756 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:36.849913 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Using SSH client type: external
	I0127 02:57:36.849946 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa (-rw-------)
	I0127 02:57:36.849967 1119007 main.go:141] libmachine: (no-preload-887091) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 02:57:36.849976 1119007 main.go:141] libmachine: (no-preload-887091) DBG | About to run SSH command:
	I0127 02:57:36.849988 1119007 main.go:141] libmachine: (no-preload-887091) DBG | exit 0
	I0127 02:57:36.973058 1119007 main.go:141] libmachine: (no-preload-887091) DBG | SSH cmd err, output: <nil>: 
	I0127 02:57:36.973490 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetConfigRaw
	I0127 02:57:36.974142 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetIP
	I0127 02:57:36.976736 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:36.977165 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:36.977220 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:36.977400 1119007 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/config.json ...
	I0127 02:57:36.977631 1119007 machine.go:93] provisionDockerMachine start ...
	I0127 02:57:36.977653 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 02:57:36.977876 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 02:57:36.980076 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:36.980411 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:36.980430 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:36.980567 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 02:57:36.980763 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 02:57:36.980915 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 02:57:36.981066 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 02:57:36.981246 1119007 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:36.981441 1119007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0127 02:57:36.981452 1119007 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 02:57:37.081339 1119007 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 02:57:37.081377 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetMachineName
	I0127 02:57:37.081658 1119007 buildroot.go:166] provisioning hostname "no-preload-887091"
	I0127 02:57:37.081691 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetMachineName
	I0127 02:57:37.081924 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 02:57:37.084380 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.084725 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:37.084753 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.084895 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 02:57:37.085106 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 02:57:37.085263 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 02:57:37.085403 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 02:57:37.085626 1119007 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:37.085814 1119007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0127 02:57:37.085825 1119007 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-887091 && echo "no-preload-887091" | sudo tee /etc/hostname
	I0127 02:57:37.195993 1119007 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-887091
	
	I0127 02:57:37.196030 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 02:57:37.198721 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.199061 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:37.199091 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.199222 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 02:57:37.199398 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 02:57:37.199587 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 02:57:37.199679 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 02:57:37.199831 1119007 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:37.200021 1119007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0127 02:57:37.200043 1119007 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-887091' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-887091/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-887091' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 02:57:37.306176 1119007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:57:37.306207 1119007 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-1057178/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-1057178/.minikube}
	I0127 02:57:37.306252 1119007 buildroot.go:174] setting up certificates
	I0127 02:57:37.306267 1119007 provision.go:84] configureAuth start
	I0127 02:57:37.306281 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetMachineName
	I0127 02:57:37.306596 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetIP
	I0127 02:57:37.309489 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.309825 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:37.309865 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.310024 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 02:57:37.311941 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.312264 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:37.312297 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.312369 1119007 provision.go:143] copyHostCerts
	I0127 02:57:37.312444 1119007 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem, removing ...
	I0127 02:57:37.312469 1119007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem
	I0127 02:57:37.312550 1119007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem (1675 bytes)
	I0127 02:57:37.312677 1119007 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem, removing ...
	I0127 02:57:37.312711 1119007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem
	I0127 02:57:37.312762 1119007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem (1078 bytes)
	I0127 02:57:37.312855 1119007 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem, removing ...
	I0127 02:57:37.312864 1119007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem
	I0127 02:57:37.312913 1119007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem (1123 bytes)
	I0127 02:57:37.313023 1119007 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem org=jenkins.no-preload-887091 san=[127.0.0.1 192.168.61.201 localhost minikube no-preload-887091]
	I0127 02:57:37.408897 1119007 provision.go:177] copyRemoteCerts
	I0127 02:57:37.409030 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 02:57:37.409075 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 02:57:37.411966 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.412302 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:37.412330 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.412523 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 02:57:37.412707 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 02:57:37.412851 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 02:57:37.412988 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 02:57:37.491461 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 02:57:37.516316 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 02:57:37.541258 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 02:57:37.565127 1119007 provision.go:87] duration metric: took 258.837723ms to configureAuth
	I0127 02:57:37.565182 1119007 buildroot.go:189] setting minikube options for container-runtime
	I0127 02:57:37.565398 1119007 config.go:182] Loaded profile config "no-preload-887091": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:57:37.565414 1119007 machine.go:96] duration metric: took 587.7693ms to provisionDockerMachine
	I0127 02:57:37.565427 1119007 start.go:293] postStartSetup for "no-preload-887091" (driver="kvm2")
	I0127 02:57:37.565455 1119007 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 02:57:37.565497 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 02:57:37.565851 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 02:57:37.565883 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 02:57:37.568521 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.568875 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:37.568905 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.569059 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 02:57:37.569248 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 02:57:37.569384 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 02:57:37.569520 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 02:57:37.652194 1119007 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 02:57:37.656807 1119007 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 02:57:37.656825 1119007 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/addons for local assets ...
	I0127 02:57:37.656879 1119007 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/files for local assets ...
	I0127 02:57:37.656966 1119007 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem -> 10644392.pem in /etc/ssl/certs
	I0127 02:57:37.657060 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 02:57:37.666921 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /etc/ssl/certs/10644392.pem (1708 bytes)
	I0127 02:57:37.694813 1119007 start.go:296] duration metric: took 129.36665ms for postStartSetup
	I0127 02:57:37.694863 1119007 fix.go:56] duration metric: took 19.761162878s for fixHost
	I0127 02:57:37.694911 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 02:57:37.697378 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.697699 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:37.697728 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.697917 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 02:57:37.698109 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 02:57:37.698223 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 02:57:37.698342 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 02:57:37.698490 1119007 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:37.698659 1119007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0127 02:57:37.698669 1119007 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 02:57:37.797890 1119007 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946657.773139085
	
	I0127 02:57:37.797917 1119007 fix.go:216] guest clock: 1737946657.773139085
	I0127 02:57:37.797927 1119007 fix.go:229] Guest: 2025-01-27 02:57:37.773139085 +0000 UTC Remote: 2025-01-27 02:57:37.694887778 +0000 UTC m=+19.907510259 (delta=78.251307ms)
	I0127 02:57:37.797955 1119007 fix.go:200] guest clock delta is within tolerance: 78.251307ms
	I0127 02:57:37.797962 1119007 start.go:83] releasing machines lock for "no-preload-887091", held for 19.864277332s
	I0127 02:57:37.797987 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 02:57:37.798292 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetIP
	I0127 02:57:37.801179 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.801603 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:37.801655 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.801775 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 02:57:37.802406 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 02:57:37.802577 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 02:57:37.802685 1119007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 02:57:37.802729 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 02:57:37.802779 1119007 ssh_runner.go:195] Run: cat /version.json
	I0127 02:57:37.802806 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 02:57:37.805280 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.805651 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:37.805679 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.805707 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.805807 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 02:57:37.806008 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 02:57:37.806169 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 02:57:37.806218 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:37.806248 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:37.806312 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 02:57:37.806416 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 02:57:37.806569 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 02:57:37.806739 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 02:57:37.806908 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 02:57:37.905701 1119007 ssh_runner.go:195] Run: systemctl --version
	I0127 02:57:37.912321 1119007 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 02:57:37.918374 1119007 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 02:57:37.918461 1119007 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 02:57:37.935436 1119007 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
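	(Editor's note: the find/mv step above sidelines any pre-existing bridge or podman CNI configs so they cannot conflict with the bridge CNI that minikube writes later. A minimal Go sketch of the same idea, with the paths and the .mk_disabled suffix taken from the log; this is illustrative only, not minikube's actual code.)

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range matches {
		name := filepath.Base(p)
		// Only bridge/podman configs are sidelined; already-disabled files are skipped.
		if (strings.Contains(name, "bridge") || strings.Contains(name, "podman")) &&
			!strings.HasSuffix(name, ".mk_disabled") {
			fmt.Println("disabling", p)
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				fmt.Println(err)
			}
		}
	}
}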
	I0127 02:57:37.935460 1119007 start.go:495] detecting cgroup driver to use...
	I0127 02:57:37.935528 1119007 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 02:57:37.966093 1119007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 02:57:37.981853 1119007 docker.go:217] disabling cri-docker service (if available) ...
	I0127 02:57:37.981927 1119007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 02:57:37.996166 1119007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 02:57:38.010386 1119007 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 02:57:38.147866 1119007 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 02:57:38.297821 1119007 docker.go:233] disabling docker service ...
	I0127 02:57:38.297892 1119007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 02:57:38.315550 1119007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 02:57:38.330634 1119007 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 02:57:38.468074 1119007 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 02:57:38.586611 1119007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 02:57:38.601731 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 02:57:38.624294 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 02:57:38.635465 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 02:57:38.646317 1119007 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 02:57:38.646407 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 02:57:38.656764 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 02:57:38.667294 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 02:57:38.677687 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 02:57:38.688025 1119007 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 02:57:38.698919 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 02:57:38.709435 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 02:57:38.719630 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
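	(Editor's note: the series of sed commands above rewrites /etc/containerd/config.toml in place, for example forcing SystemdCgroup = false to match the "cgroupfs" driver chosen earlier. A small Go sketch of that one substitution, assuming the file contents are already in memory; a sketch only, not the code that produced this log.)

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A fragment of a containerd config.toml as it might look before the edit.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Same substitution as the sed command in the log: force SystemdCgroup = false
	// while preserving the original indentation.
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}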
	I0127 02:57:38.730310 1119007 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 02:57:38.739553 1119007 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 02:57:38.739618 1119007 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 02:57:38.752608 1119007 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
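	(Editor's note: enabling IPv4 forwarding is just a write to the procfs knob, as the echo above shows. The same effect in Go, assuming root privileges; a hypothetical helper, not taken from minikube.)

package main

import (
	"fmt"
	"os"
)

func main() {
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`; requires root.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
		return
	}
	fmt.Println("ip_forward enabled")
}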
	I0127 02:57:38.762650 1119007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:57:38.877193 1119007 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 02:57:38.909183 1119007 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 02:57:38.909305 1119007 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 02:57:38.914225 1119007 retry.go:31] will retry after 794.922269ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
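	(Editor's note: the retry above simply polls for the containerd socket until it appears after the restart. A comparable poll loop in Go, with the socket path and the 60s budget taken from the log; a sketch only.)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(800 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	// 60s matches the "Will wait 60s for socket path" budget in the log.
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("containerd socket is up")
	}
}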
	I0127 02:57:39.710334 1119007 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 02:57:39.716338 1119007 start.go:563] Will wait 60s for crictl version
	I0127 02:57:39.716396 1119007 ssh_runner.go:195] Run: which crictl
	I0127 02:57:39.720744 1119007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 02:57:39.766069 1119007 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 02:57:39.766130 1119007 ssh_runner.go:195] Run: containerd --version
	I0127 02:57:39.796216 1119007 ssh_runner.go:195] Run: containerd --version
	I0127 02:57:39.823008 1119007 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 02:57:39.824419 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetIP
	I0127 02:57:39.827434 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:39.827849 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 02:57:39.827880 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 02:57:39.828134 1119007 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 02:57:39.832991 1119007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
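	(Editor's note: the bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends the gateway IP. A rough Go equivalent, assuming the process can write /etc/hosts directly; the log instead stages a temp file and uses sudo cp.)

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts" // assumes direct write access
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Println(err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous host.minikube.internal mapping, like the grep -v above.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.61.1\thost.minikube.internal")
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Println(err)
	}
}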
	I0127 02:57:39.851687 1119007 kubeadm.go:883] updating cluster {Name:no-preload-887091 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-887091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 02:57:39.851862 1119007 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:57:39.851922 1119007 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:57:39.888199 1119007 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 02:57:39.888237 1119007 cache_images.go:84] Images are preloaded, skipping loading
	I0127 02:57:39.888246 1119007 kubeadm.go:934] updating node { 192.168.61.201 8443 v1.32.1 containerd true true} ...
	I0127 02:57:39.888357 1119007 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-887091 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-887091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 02:57:39.888413 1119007 ssh_runner.go:195] Run: sudo crictl info
	I0127 02:57:39.925368 1119007 cni.go:84] Creating CNI manager for ""
	I0127 02:57:39.925404 1119007 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 02:57:39.925417 1119007 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:57:39.925447 1119007 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.201 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-887091 NodeName:no-preload-887091 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 02:57:39.925650 1119007 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-887091"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.201"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.201"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 02:57:39.925742 1119007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 02:57:39.942833 1119007 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:57:39.942902 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:57:39.953967 1119007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0127 02:57:39.975996 1119007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:57:39.998062 1119007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2313 bytes)
	I0127 02:57:40.018697 1119007 ssh_runner.go:195] Run: grep 192.168.61.201	control-plane.minikube.internal$ /etc/hosts
	I0127 02:57:40.022738 1119007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:57:40.037382 1119007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:57:40.145744 1119007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:57:40.164872 1119007 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091 for IP: 192.168.61.201
	I0127 02:57:40.164902 1119007 certs.go:194] generating shared ca certs ...
	I0127 02:57:40.164925 1119007 certs.go:226] acquiring lock for ca certs: {Name:mk567acc23cbe907605c03a2ec03c8e4859e8343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:57:40.165163 1119007 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key
	I0127 02:57:40.165232 1119007 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key
	I0127 02:57:40.165247 1119007 certs.go:256] generating profile certs ...
	I0127 02:57:40.165476 1119007 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/client.key
	I0127 02:57:40.165563 1119007 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/apiserver.key.aacd82e8
	I0127 02:57:40.165631 1119007 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/proxy-client.key
	I0127 02:57:40.165784 1119007 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem (1338 bytes)
	W0127 02:57:40.165824 1119007 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439_empty.pem, impossibly tiny 0 bytes
	I0127 02:57:40.165835 1119007 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 02:57:40.165856 1119007 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:57:40.165879 1119007 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:57:40.165900 1119007 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem (1675 bytes)
	I0127 02:57:40.165947 1119007 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem (1708 bytes)
	I0127 02:57:40.166801 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:57:40.205043 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:57:40.233653 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:57:40.263194 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 02:57:40.300032 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 02:57:40.328591 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 02:57:40.365362 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:57:40.394991 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 02:57:40.426137 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem --> /usr/share/ca-certificates/1064439.pem (1338 bytes)
	I0127 02:57:40.453968 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /usr/share/ca-certificates/10644392.pem (1708 bytes)
	I0127 02:57:40.478752 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:57:40.503851 1119007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:57:40.523274 1119007 ssh_runner.go:195] Run: openssl version
	I0127 02:57:40.529744 1119007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1064439.pem && ln -fs /usr/share/ca-certificates/1064439.pem /etc/ssl/certs/1064439.pem"
	I0127 02:57:40.543427 1119007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1064439.pem
	I0127 02:57:40.548863 1119007 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:51 /usr/share/ca-certificates/1064439.pem
	I0127 02:57:40.548932 1119007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1064439.pem
	I0127 02:57:40.555890 1119007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1064439.pem /etc/ssl/certs/51391683.0"
	I0127 02:57:40.567770 1119007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10644392.pem && ln -fs /usr/share/ca-certificates/10644392.pem /etc/ssl/certs/10644392.pem"
	I0127 02:57:40.579663 1119007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10644392.pem
	I0127 02:57:40.584502 1119007 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:51 /usr/share/ca-certificates/10644392.pem
	I0127 02:57:40.584560 1119007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10644392.pem
	I0127 02:57:40.590675 1119007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10644392.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:57:40.602765 1119007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:57:40.614990 1119007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:57:40.620008 1119007 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:43 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:57:40.620066 1119007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:57:40.626331 1119007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:57:40.638159 1119007 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:57:40.642982 1119007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 02:57:40.649025 1119007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 02:57:40.655003 1119007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 02:57:40.661855 1119007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 02:57:40.668260 1119007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 02:57:40.674724 1119007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
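	(Editor's note: the openssl x509 -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours. A standard-library Go sketch of the same check; the certificate path below is just one example taken from the log.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// which is the question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}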
	I0127 02:57:40.681057 1119007 kubeadm.go:392] StartCluster: {Name:no-preload-887091 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-887091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:57:40.681181 1119007 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 02:57:40.681238 1119007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:57:40.730453 1119007 cri.go:89] found id: "90588c3ce0b8a9dbaf560bcb262f23194dd933c68b927fd7c5e20be35de69201"
	I0127 02:57:40.730481 1119007 cri.go:89] found id: "3e5949f2b39e4ab9da5b70635fa1895d21df41ff4db6ea86abdd193c100aa8a7"
	I0127 02:57:40.730486 1119007 cri.go:89] found id: "7cdb346acd2e0edae08bea0014e70359ae7bb2671a69f291e9b91c63f040e324"
	I0127 02:57:40.730497 1119007 cri.go:89] found id: "9eddd5efbbd2d70ea5381943c743568f31bbfcffc12b83bec443d7dd34d43c9a"
	I0127 02:57:40.730500 1119007 cri.go:89] found id: "4d92a1eac3fbc99ef3fc12923dd53feb32ddaeee2f883fba7011662b2b3f3810"
	I0127 02:57:40.730505 1119007 cri.go:89] found id: "0763fefc30ad1b620c709c9e4ed03bf2898f401c29b68422d3513ffeb849aa91"
	I0127 02:57:40.730509 1119007 cri.go:89] found id: "e2050524cabd927c990edac7abbdb29dba13769b611193b5df791f54e67e0b9d"
	I0127 02:57:40.730513 1119007 cri.go:89] found id: "c5ecfdbe22b7f95ac852a8c856e9c6e0cd678ffb3188180ee85d6af384e9a017"
	I0127 02:57:40.730517 1119007 cri.go:89] found id: ""
	I0127 02:57:40.730584 1119007 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 02:57:40.746631 1119007 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T02:57:40Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 02:57:40.746770 1119007 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 02:57:40.757045 1119007 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 02:57:40.757070 1119007 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 02:57:40.757118 1119007 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 02:57:40.767762 1119007 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:57:40.768602 1119007 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-887091" does not appear in /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 02:57:40.769144 1119007 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-1057178/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-887091" cluster setting kubeconfig missing "no-preload-887091" context setting]
	I0127 02:57:40.769852 1119007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:57:40.771437 1119007 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 02:57:40.784688 1119007 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.201
	I0127 02:57:40.784725 1119007 kubeadm.go:1160] stopping kube-system containers ...
	I0127 02:57:40.784740 1119007 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 02:57:40.784842 1119007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:57:40.826025 1119007 cri.go:89] found id: "90588c3ce0b8a9dbaf560bcb262f23194dd933c68b927fd7c5e20be35de69201"
	I0127 02:57:40.826050 1119007 cri.go:89] found id: "3e5949f2b39e4ab9da5b70635fa1895d21df41ff4db6ea86abdd193c100aa8a7"
	I0127 02:57:40.826055 1119007 cri.go:89] found id: "7cdb346acd2e0edae08bea0014e70359ae7bb2671a69f291e9b91c63f040e324"
	I0127 02:57:40.826077 1119007 cri.go:89] found id: "9eddd5efbbd2d70ea5381943c743568f31bbfcffc12b83bec443d7dd34d43c9a"
	I0127 02:57:40.826082 1119007 cri.go:89] found id: "4d92a1eac3fbc99ef3fc12923dd53feb32ddaeee2f883fba7011662b2b3f3810"
	I0127 02:57:40.826087 1119007 cri.go:89] found id: "0763fefc30ad1b620c709c9e4ed03bf2898f401c29b68422d3513ffeb849aa91"
	I0127 02:57:40.826091 1119007 cri.go:89] found id: "e2050524cabd927c990edac7abbdb29dba13769b611193b5df791f54e67e0b9d"
	I0127 02:57:40.826096 1119007 cri.go:89] found id: "c5ecfdbe22b7f95ac852a8c856e9c6e0cd678ffb3188180ee85d6af384e9a017"
	I0127 02:57:40.826100 1119007 cri.go:89] found id: ""
	I0127 02:57:40.826107 1119007 cri.go:252] Stopping containers: [90588c3ce0b8a9dbaf560bcb262f23194dd933c68b927fd7c5e20be35de69201 3e5949f2b39e4ab9da5b70635fa1895d21df41ff4db6ea86abdd193c100aa8a7 7cdb346acd2e0edae08bea0014e70359ae7bb2671a69f291e9b91c63f040e324 9eddd5efbbd2d70ea5381943c743568f31bbfcffc12b83bec443d7dd34d43c9a 4d92a1eac3fbc99ef3fc12923dd53feb32ddaeee2f883fba7011662b2b3f3810 0763fefc30ad1b620c709c9e4ed03bf2898f401c29b68422d3513ffeb849aa91 e2050524cabd927c990edac7abbdb29dba13769b611193b5df791f54e67e0b9d c5ecfdbe22b7f95ac852a8c856e9c6e0cd678ffb3188180ee85d6af384e9a017]
	I0127 02:57:40.826175 1119007 ssh_runner.go:195] Run: which crictl
	I0127 02:57:40.830410 1119007 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 90588c3ce0b8a9dbaf560bcb262f23194dd933c68b927fd7c5e20be35de69201 3e5949f2b39e4ab9da5b70635fa1895d21df41ff4db6ea86abdd193c100aa8a7 7cdb346acd2e0edae08bea0014e70359ae7bb2671a69f291e9b91c63f040e324 9eddd5efbbd2d70ea5381943c743568f31bbfcffc12b83bec443d7dd34d43c9a 4d92a1eac3fbc99ef3fc12923dd53feb32ddaeee2f883fba7011662b2b3f3810 0763fefc30ad1b620c709c9e4ed03bf2898f401c29b68422d3513ffeb849aa91 e2050524cabd927c990edac7abbdb29dba13769b611193b5df791f54e67e0b9d c5ecfdbe22b7f95ac852a8c856e9c6e0cd678ffb3188180ee85d6af384e9a017
	I0127 02:57:40.882866 1119007 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 02:57:40.899075 1119007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:57:40.910270 1119007 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 02:57:40.910298 1119007 kubeadm.go:157] found existing configuration files:
	
	I0127 02:57:40.910362 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 02:57:40.919483 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 02:57:40.919535 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:57:40.928981 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 02:57:40.938758 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 02:57:40.938833 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:57:40.952460 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 02:57:40.962955 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 02:57:40.963025 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:57:40.973872 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 02:57:40.983205 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 02:57:40.983280 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 02:57:40.993991 1119007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 02:57:41.004968 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:57:41.152772 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:57:42.177753 1119007 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.024931478s)
	I0127 02:57:42.177800 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:57:42.417533 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:57:42.511014 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
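	(Editor's note: the restart is driven by individual kubeadm init phases rather than a full kubeadm init. A hedged sketch of invoking those phases from Go via bash, mirroring the commands in the log; this is not minikube's own runner.)

package main

import (
	"fmt"
	"os/exec"
)

// runPhase shells out the way the log lines above do: the kubeadm binary for the
// target Kubernetes version is put on PATH and a single init phase is executed.
func runPhase(phase string) error {
	cmd := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase `+
			phase+` --config /var/tmp/minikube/kubeadm.yaml`)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
		if err := runPhase(phase); err != nil {
			fmt.Println("phase", phase, "failed:", err)
			return
		}
	}
}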
	I0127 02:57:42.594177 1119007 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:57:42.594282 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:57:43.095370 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:57:43.594987 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:57:44.095250 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:57:44.113039 1119007 api_server.go:72] duration metric: took 1.518862074s to wait for apiserver process to appear ...
	I0127 02:57:44.113072 1119007 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:57:44.113103 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0127 02:57:46.518925 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 02:57:46.518959 1119007 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 02:57:46.518979 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0127 02:57:46.540719 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 02:57:46.540755 1119007 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 02:57:46.614126 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0127 02:57:46.628902 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:57:46.628971 1119007 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:57:47.113363 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0127 02:57:47.125469 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:57:47.125511 1119007 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:57:47.613179 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0127 02:57:47.618904 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:57:47.618939 1119007 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:57:48.113537 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0127 02:57:48.118110 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0127 02:57:48.125708 1119007 api_server.go:141] control plane version: v1.32.1
	I0127 02:57:48.125745 1119007 api_server.go:131] duration metric: took 4.012658353s to wait for apiserver health ...
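	(Editor's note: until the RBAC and bootstrap post-start hooks finish, /healthz keeps returning 403 or 500 as shown above, and the wait loop simply polls until it gets 200 ok. A minimal Go poller against the same endpoint, skipping TLS verification only to keep the sketch short; the real check authenticates against the cluster CA.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 20; i++ {
		resp, err := client.Get("https://192.168.61.201:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}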
	I0127 02:57:48.125759 1119007 cni.go:84] Creating CNI manager for ""
	I0127 02:57:48.125768 1119007 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 02:57:48.127566 1119007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 02:57:48.128782 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 02:57:48.140489 1119007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 02:57:48.162813 1119007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 02:57:48.176319 1119007 system_pods.go:59] 8 kube-system pods found
	I0127 02:57:48.176370 1119007 system_pods.go:61] "coredns-668d6bf9bc-qkz5q" [f8f92df8-ef36-49b9-bb22-a88ab7906ac5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 02:57:48.176383 1119007 system_pods.go:61] "etcd-no-preload-887091" [be14b789-0033-4668-89a8-79a123455ba3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 02:57:48.176398 1119007 system_pods.go:61] "kube-apiserver-no-preload-887091" [cf42ffe7-87d3-4474-aff6-d86557db813d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 02:57:48.176412 1119007 system_pods.go:61] "kube-controller-manager-no-preload-887091" [d81a3345-0b6b-4650-9dba-0e4b0828728d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 02:57:48.176425 1119007 system_pods.go:61] "kube-proxy-rb9xh" [2dd0f353-2a59-4ee0-95d3-57bb062e90fd] Running
	I0127 02:57:48.176438 1119007 system_pods.go:61] "kube-scheduler-no-preload-887091" [5a067209-1bbd-434c-b992-5ba08777bd64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 02:57:48.176448 1119007 system_pods.go:61] "metrics-server-f79f97bbb-z5lnh" [73883cee-23b2-4bd3-bfa1-99fc13c10251] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 02:57:48.176457 1119007 system_pods.go:61] "storage-provisioner" [70aaa8f6-8792-4c89-9ef2-3a774e7ffc28] Running
	I0127 02:57:48.176468 1119007 system_pods.go:74] duration metric: took 13.627705ms to wait for pod list to return data ...
	I0127 02:57:48.176481 1119007 node_conditions.go:102] verifying NodePressure condition ...
	I0127 02:57:48.181230 1119007 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 02:57:48.181258 1119007 node_conditions.go:123] node cpu capacity is 2
	I0127 02:57:48.181270 1119007 node_conditions.go:105] duration metric: took 4.781166ms to run NodePressure ...
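	(Editor's note: the kube-system pod inventory and the node-capacity figures above come from the Kubernetes API. A short client-go sketch that lists the same pods; the kubeconfig path below is a placeholder, not the one used by this run.)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the test writes its own under the integration dir.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}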
	I0127 02:57:48.181287 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:57:48.478593 1119007 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 02:57:48.484351 1119007 kubeadm.go:739] kubelet initialised
	I0127 02:57:48.484381 1119007 kubeadm.go:740] duration metric: took 5.757501ms waiting for restarted kubelet to initialise ...
	I0127 02:57:48.484394 1119007 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:57:48.490047 1119007 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-qkz5q" in "kube-system" namespace to be "Ready" ...
	I0127 02:57:50.497200 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-qkz5q" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:51.998991 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-qkz5q" in "kube-system" namespace has status "Ready":"True"
	I0127 02:57:51.999023 1119007 pod_ready.go:82] duration metric: took 3.50894667s for pod "coredns-668d6bf9bc-qkz5q" in "kube-system" namespace to be "Ready" ...
	I0127 02:57:51.999034 1119007 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 02:57:54.007466 1119007 pod_ready.go:103] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:56.505790 1119007 pod_ready.go:103] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:58.508569 1119007 pod_ready.go:103] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:01.006571 1119007 pod_ready.go:93] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:01.006605 1119007 pod_ready.go:82] duration metric: took 9.007562594s for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:01.006620 1119007 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:01.013182 1119007 pod_ready.go:93] pod "kube-apiserver-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:01.013223 1119007 pod_ready.go:82] duration metric: took 6.590337ms for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:01.013238 1119007 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:01.018376 1119007 pod_ready.go:93] pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:01.018403 1119007 pod_ready.go:82] duration metric: took 5.157185ms for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:01.018418 1119007 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rb9xh" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:01.023448 1119007 pod_ready.go:93] pod "kube-proxy-rb9xh" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:01.023473 1119007 pod_ready.go:82] duration metric: took 5.046305ms for pod "kube-proxy-rb9xh" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:01.023486 1119007 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:01.028930 1119007 pod_ready.go:93] pod "kube-scheduler-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:01.028971 1119007 pod_ready.go:82] duration metric: took 5.475315ms for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:01.028989 1119007 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:03.036089 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:05.536328 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:07.536727 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:10.036861 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:12.535610 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:15.035999 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:17.037187 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:19.038847 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:21.536348 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:24.036301 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:26.040195 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:28.040739 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:30.537378 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:33.035642 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:35.037249 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:37.536224 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:40.037082 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:42.038813 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:44.535198 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:46.535680 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:48.536326 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:50.537376 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:53.035927 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:55.536437 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:57.537110 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:00.038394 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:02.536771 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:05.038185 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:07.536177 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:09.537029 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:11.537757 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:14.037470 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:16.536465 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:19.037156 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:21.536456 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:23.536645 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:26.035836 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:28.279411 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:30.536761 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:32.537456 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:35.039986 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:37.536688 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:40.037732 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:42.537928 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:45.037622 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:47.535790 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:49.536337 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:52.037459 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:54.540462 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:56.543579 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:59.036536 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:01.535350 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:03.536165 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:06.037041 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:08.535898 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:10.536209 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:13.036599 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:15.536079 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:17.536247 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:20.036743 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:22.037936 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:24.536008 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:26.536423 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:28.536964 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:31.036263 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:33.040054 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:35.536750 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:37.537599 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:40.037026 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:42.535068 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:44.535800 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:46.536400 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:48.536806 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:50.536883 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:53.036700 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:55.536261 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:57.538026 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:00.037107 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:02.536760 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:05.036562 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:07.037686 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:09.536381 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:12.036975 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:14.037371 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:16.038039 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:18.536740 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:21.034869 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:23.035750 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:25.536208 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:28.046605 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:30.538073 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:33.036281 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:35.038144 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:37.538369 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:40.038370 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:42.537020 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:45.037268 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:47.037856 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:49.537112 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:52.036723 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:54.536260 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:57.037759 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:59.553947 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:01.029438 1119007 pod_ready.go:82] duration metric: took 4m0.000430308s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:01.029463 1119007 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 03:02:01.029492 1119007 pod_ready.go:39] duration metric: took 4m12.545085543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:01.029521 1119007 kubeadm.go:597] duration metric: took 4m20.2724454s to restartPrimaryControlPlane
	W0127 03:02:01.029578 1119007 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:01.029603 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:02:03.004910 1119007 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.9752757s)
	I0127 03:02:03.005026 1119007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:03.022327 1119007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:03.033433 1119007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:03.043716 1119007 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:03.043751 1119007 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:03.043807 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:03.053848 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:03.053913 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:03.064618 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:03.075259 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:03.075327 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:03.087088 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:03.098909 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:03.098975 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:03.110053 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:03.119864 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:03.119938 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:03.130987 1119007 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:03.185348 1119007 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:03.185417 1119007 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:03.314698 1119007 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:03.314881 1119007 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:03.315043 1119007 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:03.324401 1119007 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:03.326164 1119007 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:03.326268 1119007 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:03.326359 1119007 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:03.326477 1119007 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:03.326572 1119007 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:03.326663 1119007 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:03.326738 1119007 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:03.326859 1119007 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:03.327073 1119007 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:03.327208 1119007 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:03.327338 1119007 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:03.327408 1119007 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:03.327502 1119007 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:03.521123 1119007 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:03.756848 1119007 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:03.911089 1119007 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:04.122010 1119007 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:04.383085 1119007 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:04.383614 1119007 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:04.386205 1119007 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:04.388044 1119007 out.go:235]   - Booting up control plane ...
	I0127 03:02:04.388157 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:04.388265 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:04.388373 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:04.409379 1119007 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:04.416389 1119007 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:04.416479 1119007 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:04.571487 1119007 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:04.571690 1119007 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:05.072916 1119007 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.574288ms
	I0127 03:02:05.073090 1119007 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:10.574512 1119007 kubeadm.go:310] [api-check] The API server is healthy after 5.501444049s
	I0127 03:02:10.590265 1119007 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:10.612200 1119007 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:10.650305 1119007 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:10.650585 1119007 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-887091 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:10.661688 1119007 kubeadm.go:310] [bootstrap-token] Using token: 25alvo.7xrmg7nh4q5v903n
	I0127 03:02:10.663119 1119007 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:10.663280 1119007 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:10.671888 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:10.685310 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:10.690214 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:10.694363 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:10.698959 1119007 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:10.982964 1119007 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:11.430752 1119007 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:11.982446 1119007 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:11.984681 1119007 kubeadm.go:310] 
	I0127 03:02:11.984836 1119007 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:11.984859 1119007 kubeadm.go:310] 
	I0127 03:02:11.984989 1119007 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:11.985010 1119007 kubeadm.go:310] 
	I0127 03:02:11.985048 1119007 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:11.985139 1119007 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:11.985214 1119007 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:11.985223 1119007 kubeadm.go:310] 
	I0127 03:02:11.985308 1119007 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:11.985320 1119007 kubeadm.go:310] 
	I0127 03:02:11.985386 1119007 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:11.985394 1119007 kubeadm.go:310] 
	I0127 03:02:11.985466 1119007 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:11.985573 1119007 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:11.985666 1119007 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:11.985676 1119007 kubeadm.go:310] 
	I0127 03:02:11.985787 1119007 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:11.985893 1119007 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:11.985903 1119007 kubeadm.go:310] 
	I0127 03:02:11.986015 1119007 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 25alvo.7xrmg7nh4q5v903n \
	I0127 03:02:11.986154 1119007 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
	I0127 03:02:11.986187 1119007 kubeadm.go:310] 	--control-plane 
	I0127 03:02:11.986194 1119007 kubeadm.go:310] 
	I0127 03:02:11.986302 1119007 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:11.986313 1119007 kubeadm.go:310] 
	I0127 03:02:11.986421 1119007 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 25alvo.7xrmg7nh4q5v903n \
	I0127 03:02:11.986559 1119007 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba 
	I0127 03:02:11.988046 1119007 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:11.988085 1119007 cni.go:84] Creating CNI manager for ""
	I0127 03:02:11.988096 1119007 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:11.989984 1119007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:11.991565 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:12.008152 1119007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:12.031285 1119007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:12.031368 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:12.031415 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-887091 minikube.k8s.io/updated_at=2025_01_27T03_02_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=no-preload-887091 minikube.k8s.io/primary=true
	I0127 03:02:12.301916 1119007 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:12.302079 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:12.802985 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:13.302566 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:13.802370 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:14.302582 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:14.802350 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:15.302355 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:15.802132 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:15.926758 1119007 kubeadm.go:1113] duration metric: took 3.895467932s to wait for elevateKubeSystemPrivileges
	I0127 03:02:15.926808 1119007 kubeadm.go:394] duration metric: took 4m35.245756492s to StartCluster
	I0127 03:02:15.926834 1119007 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:15.926944 1119007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:15.928428 1119007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:15.928677 1119007 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:15.928795 1119007 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:15.928913 1119007 config.go:182] Loaded profile config "no-preload-887091": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:15.928932 1119007 addons.go:69] Setting metrics-server=true in profile "no-preload-887091"
	I0127 03:02:15.928966 1119007 addons.go:238] Setting addon metrics-server=true in "no-preload-887091"
	I0127 03:02:15.928977 1119007 addons.go:69] Setting dashboard=true in profile "no-preload-887091"
	W0127 03:02:15.928985 1119007 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:15.928991 1119007 addons.go:238] Setting addon dashboard=true in "no-preload-887091"
	I0127 03:02:15.928918 1119007 addons.go:69] Setting storage-provisioner=true in profile "no-preload-887091"
	I0127 03:02:15.929020 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.929025 1119007 addons.go:238] Setting addon storage-provisioner=true in "no-preload-887091"
	W0127 03:02:15.929036 1119007 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:15.928961 1119007 addons.go:69] Setting default-storageclass=true in profile "no-preload-887091"
	I0127 03:02:15.929073 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.929093 1119007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-887091"
	W0127 03:02:15.928999 1119007 addons.go:247] addon dashboard should already be in state true
	I0127 03:02:15.929175 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.929496 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929496 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929544 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929557 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.929547 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.929584 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.929499 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929692 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.930306 1119007 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:15.931877 1119007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:15.952533 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I0127 03:02:15.952549 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0127 03:02:15.952581 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46601
	I0127 03:02:15.952721 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
	I0127 03:02:15.954529 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.954547 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.954808 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.955205 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955229 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.955233 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955253 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.955313 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.955413 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955437 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.955766 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.955849 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.955886 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955947 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.956424 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.956463 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.956469 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.956507 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.956724 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.956927 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.957100 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.957708 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.957746 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.960884 1119007 addons.go:238] Setting addon default-storageclass=true in "no-preload-887091"
	W0127 03:02:15.960910 1119007 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:15.960960 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.961323 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.961366 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.977560 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
	I0127 03:02:15.978028 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.978173 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
	I0127 03:02:15.978517 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0127 03:02:15.978693 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.978872 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.978901 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.979226 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.979298 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.979562 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.979576 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.979593 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.979923 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.980113 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.980289 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.980304 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.980894 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.981251 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.981811 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:15.982385 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:15.983016 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:15.983162 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I0127 03:02:15.983756 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.983837 1119007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:15.984185 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.984202 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.984606 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.985117 1119007 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:15.985204 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.985237 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.985253 1119007 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:15.985273 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:15.985297 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:15.985367 1119007 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:15.986458 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:15.986480 1119007 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:15.986546 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:15.987599 1119007 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:15.988812 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.988933 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:15.989273 1119007 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:15.989471 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:15.989502 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.989571 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:15.989716 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:15.989884 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:15.990033 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:15.990172 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:15.990858 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.991445 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:15.991468 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.991628 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:15.991828 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:15.992248 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:15.992428 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:15.993703 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.994218 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:15.994244 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.994557 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:15.994742 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:15.994902 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:15.995042 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:16.004890 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I0127 03:02:16.005324 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:16.005841 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:16.005861 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:16.006249 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:16.006454 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:16.008475 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:16.008706 1119007 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:16.008719 1119007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:16.008733 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:16.011722 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:16.012561 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:16.012637 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:16.012663 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:16.012777 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:16.012973 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:16.013155 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:16.171165 1119007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:16.193562 1119007 node_ready.go:35] waiting up to 6m0s for node "no-preload-887091" to be "Ready" ...
	I0127 03:02:16.246946 1119007 node_ready.go:49] node "no-preload-887091" has status "Ready":"True"
	I0127 03:02:16.246978 1119007 node_ready.go:38] duration metric: took 53.383421ms for node "no-preload-887091" to be "Ready" ...
	I0127 03:02:16.246992 1119007 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:16.274293 1119007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:16.274621 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:16.274647 1119007 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:16.305232 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:16.327479 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:16.328118 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:16.328136 1119007 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:16.428329 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:16.428364 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:16.466201 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:16.466236 1119007 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:16.599271 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:16.599315 1119007 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:16.638608 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:16.638637 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:16.828108 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:16.828150 1119007 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:16.838645 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:16.838676 1119007 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:16.984773 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:16.984808 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:16.985269 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:16.985286 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:16.985295 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:16.985302 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:16.985629 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:16.985649 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.004424 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:17.004447 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:17.004789 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:17.004799 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:17.004830 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.011294 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:17.011605 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:17.011624 1119007 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:17.109457 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:17.109494 1119007 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:17.218037 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:17.218071 1119007 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:17.272264 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:17.272299 1119007 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:17.346698 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:17.903867 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.57633993s)
	I0127 03:02:17.903940 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:17.903958 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:17.904299 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:17.904382 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:17.904399 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.904412 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:17.904418 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:17.904680 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:17.904702 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.904715 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:18.291876 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.280535535s)
	I0127 03:02:18.291939 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:18.291962 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:18.292296 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:18.292315 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:18.292323 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:18.292329 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:18.293045 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:18.293120 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:18.293147 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:18.293165 1119007 addons.go:479] Verifying addon metrics-server=true in "no-preload-887091"
	I0127 03:02:18.308148 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:19.202588 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.855830221s)
	I0127 03:02:19.202668 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:19.202685 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:19.202996 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:19.203014 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:19.203031 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:19.203046 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:19.203365 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:19.203408 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:19.205207 1119007 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-887091 addons enable metrics-server
	
	I0127 03:02:19.206884 1119007 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 03:02:19.208319 1119007 addons.go:514] duration metric: took 3.279531879s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 03:02:20.784826 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:23.282142 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:25.286311 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.286348 1119007 pod_ready.go:82] duration metric: took 9.012019717s for pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.286363 1119007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.296155 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.296266 1119007 pod_ready.go:82] duration metric: took 9.891475ms for pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.296304 1119007 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.306424 1119007 pod_ready.go:93] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.306520 1119007 pod_ready.go:82] duration metric: took 10.178061ms for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.306550 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.316320 1119007 pod_ready.go:93] pod "kube-apiserver-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.316353 1119007 pod_ready.go:82] duration metric: took 9.779811ms for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.316368 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.324972 1119007 pod_ready.go:93] pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.324998 1119007 pod_ready.go:82] duration metric: took 8.620263ms for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.325011 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-45pz6" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.682761 1119007 pod_ready.go:93] pod "kube-proxy-45pz6" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.682792 1119007 pod_ready.go:82] duration metric: took 357.773408ms for pod "kube-proxy-45pz6" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.682807 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:26.086323 1119007 pod_ready.go:93] pod "kube-scheduler-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:26.086365 1119007 pod_ready.go:82] duration metric: took 403.548355ms for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:26.086378 1119007 pod_ready.go:39] duration metric: took 9.839373235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:26.086398 1119007 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:26.086493 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:26.115441 1119007 api_server.go:72] duration metric: took 10.186729821s to wait for apiserver process to appear ...
	I0127 03:02:26.115474 1119007 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:26.115503 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0127 03:02:26.125822 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0127 03:02:26.127247 1119007 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:26.127277 1119007 api_server.go:131] duration metric: took 11.792506ms to wait for apiserver health ...
	I0127 03:02:26.127289 1119007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:26.285021 1119007 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:26.285059 1119007 system_pods.go:61] "coredns-668d6bf9bc-86j6q" [9b85ae79-ae19-4cd1-a0da-0343c9e2801c] Running
	I0127 03:02:26.285067 1119007 system_pods.go:61] "coredns-668d6bf9bc-fk8cw" [c7075b92-233d-4a5a-b864-ef349d7125e7] Running
	I0127 03:02:26.285073 1119007 system_pods.go:61] "etcd-no-preload-887091" [45d4a5fc-797f-4d4a-9204-049ebcdc5647] Running
	I0127 03:02:26.285079 1119007 system_pods.go:61] "kube-apiserver-no-preload-887091" [11e7ea14-678a-408f-a722-8fedb984c086] Running
	I0127 03:02:26.285085 1119007 system_pods.go:61] "kube-controller-manager-no-preload-887091" [95d63381-33aa-428b-80b1-6e8ccf96b8a1] Running
	I0127 03:02:26.285089 1119007 system_pods.go:61] "kube-proxy-45pz6" [b3aa986f-d6d8-4050-8760-438aabd39bdc] Running
	I0127 03:02:26.285094 1119007 system_pods.go:61] "kube-scheduler-no-preload-887091" [5065d24f-256d-43ad-bd00-1d5868b7214d] Running
	I0127 03:02:26.285104 1119007 system_pods.go:61] "metrics-server-f79f97bbb-vshg4" [33ae36ed-d8a4-4d60-bcd0-1becf2d490bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:26.285110 1119007 system_pods.go:61] "storage-provisioner" [127a1f13-b70c-4482-bd8b-14a6bf24b663] Running
	I0127 03:02:26.285121 1119007 system_pods.go:74] duration metric: took 157.824017ms to wait for pod list to return data ...
	I0127 03:02:26.285134 1119007 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:02:26.480092 1119007 default_sa.go:45] found service account: "default"
	I0127 03:02:26.480128 1119007 default_sa.go:55] duration metric: took 194.984911ms for default service account to be created ...
	I0127 03:02:26.480141 1119007 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:02:26.688727 1119007 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-887091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-887091 -n no-preload-887091
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-887091 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-887091 logs -n 25: (1.478230465s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-760492        | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 02:56 UTC | 27 Jan 25 02:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 02:56 UTC | 27 Jan 25 02:57 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-887091                  | no-preload-887091            | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-887091                                   | no-preload-887091            | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-264552                 | embed-certs-264552           | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-717075       | default-k8s-diff-port-717075 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-264552                                  | embed-certs-264552           | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-717075 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC |                     |
	|         | default-k8s-diff-port-717075                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-760492             | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 03:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-760492 image                           | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	| delete  | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	| start   | -p newest-cni-642127 --memory=2200 --alsologtostderr   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-642127             | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-642127                  | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-642127 --memory=2200 --alsologtostderr   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-642127 image list                           | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	| delete  | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 03:02:00
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 03:02:00.237835 1121411 out.go:345] Setting OutFile to fd 1 ...
	I0127 03:02:00.238128 1121411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:02:00.238140 1121411 out.go:358] Setting ErrFile to fd 2...
	I0127 03:02:00.238146 1121411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:02:00.238345 1121411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 03:02:00.239045 1121411 out.go:352] Setting JSON to false
	I0127 03:02:00.240327 1121411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13467,"bootTime":1737933453,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:02:00.240474 1121411 start.go:139] virtualization: kvm guest
	I0127 03:02:00.242533 1121411 out.go:177] * [newest-cni-642127] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:02:00.244184 1121411 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 03:02:00.244247 1121411 notify.go:220] Checking for updates...
	I0127 03:02:00.246478 1121411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:02:00.247855 1121411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:00.249125 1121411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	I0127 03:02:00.250346 1121411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:02:00.251585 1121411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 03:02:00.253406 1121411 config.go:182] Loaded profile config "newest-cni-642127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:00.254032 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:00.254107 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:00.270414 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33505
	I0127 03:02:00.270862 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:00.271405 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:00.271428 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:00.271776 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:00.271945 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:00.272173 1121411 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 03:02:00.272461 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:00.272496 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:00.287317 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36083
	I0127 03:02:00.287836 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:00.288298 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:00.288340 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:00.288708 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:00.288885 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:00.325767 1121411 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 03:02:00.327047 1121411 start.go:297] selected driver: kvm2
	I0127 03:02:00.327060 1121411 start.go:901] validating driver "kvm2" against &{Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenA
ddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:02:00.327183 1121411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 03:02:00.327982 1121411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:02:00.328064 1121411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-1057178/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 03:02:00.343178 1121411 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 03:02:00.343639 1121411 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 03:02:00.343677 1121411 cni.go:84] Creating CNI manager for ""
	I0127 03:02:00.343730 1121411 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:00.343763 1121411 start.go:340] cluster config:
	{Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:02:00.343883 1121411 iso.go:125] acquiring lock: {Name:mkd30bc9d11f9170e89ad95ce7ba25fa6d1e04f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:02:00.345590 1121411 out.go:177] * Starting "newest-cni-642127" primary control-plane node in "newest-cni-642127" cluster
	I0127 03:02:00.346774 1121411 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 03:02:00.346814 1121411 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 03:02:00.346828 1121411 cache.go:56] Caching tarball of preloaded images
	I0127 03:02:00.346908 1121411 preload.go:172] Found /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 03:02:00.346919 1121411 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 03:02:00.347008 1121411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/config.json ...
	I0127 03:02:00.347215 1121411 start.go:360] acquireMachinesLock for newest-cni-642127: {Name:mka8dc154c517d64837d06e2f84f8bddd0b82c58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 03:02:00.347258 1121411 start.go:364] duration metric: took 23.213µs to acquireMachinesLock for "newest-cni-642127"
	I0127 03:02:00.347273 1121411 start.go:96] Skipping create...Using existing machine configuration
	I0127 03:02:00.347278 1121411 fix.go:54] fixHost starting: 
	I0127 03:02:00.347525 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:00.347569 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:00.362339 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I0127 03:02:00.362837 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:00.363413 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:00.363435 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:00.363738 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:00.363908 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:00.364065 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:00.365643 1121411 fix.go:112] recreateIfNeeded on newest-cni-642127: state=Stopped err=<nil>
	I0127 03:02:00.365669 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	W0127 03:02:00.366076 1121411 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 03:02:00.368560 1121411 out.go:177] * Restarting existing kvm2 VM for "newest-cni-642127" ...
	I0127 03:01:59.553947 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:01.029438 1119007 pod_ready.go:82] duration metric: took 4m0.000430308s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:01.029463 1119007 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 03:02:01.029492 1119007 pod_ready.go:39] duration metric: took 4m12.545085543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:01.029521 1119007 kubeadm.go:597] duration metric: took 4m20.2724454s to restartPrimaryControlPlane
	W0127 03:02:01.029578 1119007 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:01.029603 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:02:03.004910 1119007 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.9752757s)
	I0127 03:02:03.005026 1119007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:03.022327 1119007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:03.033433 1119007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:03.043716 1119007 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:03.043751 1119007 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:03.043807 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:03.053848 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:03.053913 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:03.064618 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:03.075259 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:03.075327 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:03.087088 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:03.098909 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:03.098975 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:03.110053 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:03.119864 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:03.119938 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:03.130987 1119007 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:03.185348 1119007 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:03.185417 1119007 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:03.314698 1119007 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:03.314881 1119007 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:03.315043 1119007 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:03.324401 1119007 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:03.326164 1119007 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:03.326268 1119007 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:03.326359 1119007 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:03.326477 1119007 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:03.326572 1119007 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:03.326663 1119007 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:03.326738 1119007 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:03.326859 1119007 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:03.327073 1119007 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:03.327208 1119007 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:03.327338 1119007 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:03.327408 1119007 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:03.327502 1119007 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:03.521123 1119007 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:03.756848 1119007 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:03.911089 1119007 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:04.122010 1119007 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:04.383085 1119007 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:04.383614 1119007 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:04.386205 1119007 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:00.791431 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:02.793532 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:00.101750 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:02.600452 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:00.369945 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Start
	I0127 03:02:00.370121 1121411 main.go:141] libmachine: (newest-cni-642127) starting domain...
	I0127 03:02:00.370143 1121411 main.go:141] libmachine: (newest-cni-642127) ensuring networks are active...
	I0127 03:02:00.370872 1121411 main.go:141] libmachine: (newest-cni-642127) Ensuring network default is active
	I0127 03:02:00.371180 1121411 main.go:141] libmachine: (newest-cni-642127) Ensuring network mk-newest-cni-642127 is active
	I0127 03:02:00.371540 1121411 main.go:141] libmachine: (newest-cni-642127) getting domain XML...
	I0127 03:02:00.372193 1121411 main.go:141] libmachine: (newest-cni-642127) creating domain...
	I0127 03:02:01.655632 1121411 main.go:141] libmachine: (newest-cni-642127) waiting for IP...
	I0127 03:02:01.656638 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:01.657157 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:01.657251 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:01.657139 1121446 retry.go:31] will retry after 277.784658ms: waiting for domain to come up
	I0127 03:02:01.936660 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:01.937240 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:01.937271 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:01.937207 1121446 retry.go:31] will retry after 238.163617ms: waiting for domain to come up
	I0127 03:02:02.176792 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:02.177474 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:02.177544 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:02.177436 1121446 retry.go:31] will retry after 380.939356ms: waiting for domain to come up
	I0127 03:02:02.560097 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:02.560666 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:02.560700 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:02.560618 1121446 retry.go:31] will retry after 505.552982ms: waiting for domain to come up
	I0127 03:02:03.067443 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:03.067968 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:03.068040 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:03.067965 1121446 retry.go:31] will retry after 727.427105ms: waiting for domain to come up
	I0127 03:02:03.797031 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:03.797596 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:03.797621 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:03.797562 1121446 retry.go:31] will retry after 647.611718ms: waiting for domain to come up
	I0127 03:02:04.447043 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:04.447523 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:04.447556 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:04.447508 1121446 retry.go:31] will retry after 984.747883ms: waiting for domain to come up
	I0127 03:02:04.388044 1119007 out.go:235]   - Booting up control plane ...
	I0127 03:02:04.388157 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:04.388265 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:04.388373 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:04.409379 1119007 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:04.416389 1119007 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:04.416479 1119007 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:04.571487 1119007 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:04.571690 1119007 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:05.072916 1119007 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.574288ms
	I0127 03:02:05.073090 1119007 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:05.292102 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:07.292399 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:09.792796 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:05.099225 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:07.099594 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:09.600572 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:05.434383 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:05.434961 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:05.434994 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:05.434926 1121446 retry.go:31] will retry after 1.239188819s: waiting for domain to come up
	I0127 03:02:06.675638 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:06.676209 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:06.676244 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:06.676172 1121446 retry.go:31] will retry after 1.489275436s: waiting for domain to come up
	I0127 03:02:08.167884 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:08.168365 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:08.168402 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:08.168327 1121446 retry.go:31] will retry after 1.739982698s: waiting for domain to come up
	I0127 03:02:09.910362 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:09.910871 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:09.910964 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:09.910871 1121446 retry.go:31] will retry after 2.79669233s: waiting for domain to come up
	I0127 03:02:10.574512 1119007 kubeadm.go:310] [api-check] The API server is healthy after 5.501444049s
	I0127 03:02:10.590265 1119007 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:10.612200 1119007 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:10.650305 1119007 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:10.650585 1119007 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-887091 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:10.661688 1119007 kubeadm.go:310] [bootstrap-token] Using token: 25alvo.7xrmg7nh4q5v903n
	I0127 03:02:10.663119 1119007 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:10.663280 1119007 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:10.671888 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:10.685310 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:10.690214 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:10.694363 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:10.698959 1119007 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:10.982964 1119007 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:11.430752 1119007 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:11.982446 1119007 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:11.984681 1119007 kubeadm.go:310] 
	I0127 03:02:11.984836 1119007 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:11.984859 1119007 kubeadm.go:310] 
	I0127 03:02:11.984989 1119007 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:11.985010 1119007 kubeadm.go:310] 
	I0127 03:02:11.985048 1119007 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:11.985139 1119007 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:11.985214 1119007 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:11.985223 1119007 kubeadm.go:310] 
	I0127 03:02:11.985308 1119007 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:11.985320 1119007 kubeadm.go:310] 
	I0127 03:02:11.985386 1119007 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:11.985394 1119007 kubeadm.go:310] 
	I0127 03:02:11.985466 1119007 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:11.985573 1119007 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:11.985666 1119007 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:11.985676 1119007 kubeadm.go:310] 
	I0127 03:02:11.985787 1119007 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:11.985893 1119007 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:11.985903 1119007 kubeadm.go:310] 
	I0127 03:02:11.986015 1119007 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 25alvo.7xrmg7nh4q5v903n \
	I0127 03:02:11.986154 1119007 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
	I0127 03:02:11.986187 1119007 kubeadm.go:310] 	--control-plane 
	I0127 03:02:11.986194 1119007 kubeadm.go:310] 
	I0127 03:02:11.986302 1119007 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:11.986313 1119007 kubeadm.go:310] 
	I0127 03:02:11.986421 1119007 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 25alvo.7xrmg7nh4q5v903n \
	I0127 03:02:11.986559 1119007 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba 
	I0127 03:02:11.988046 1119007 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:11.988085 1119007 cni.go:84] Creating CNI manager for ""
	I0127 03:02:11.988096 1119007 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:11.989984 1119007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:11.991565 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:12.008152 1119007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:12.031285 1119007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:12.031368 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:12.031415 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-887091 minikube.k8s.io/updated_at=2025_01_27T03_02_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=no-preload-887091 minikube.k8s.io/primary=true
	I0127 03:02:12.301916 1119007 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:12.302079 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:12.802985 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:11.795142 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:14.292215 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:11.613207 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:14.098783 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:12.710060 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:12.710698 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:12.710737 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:12.710630 1121446 retry.go:31] will retry after 2.899766509s: waiting for domain to come up
	I0127 03:02:13.302566 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:13.802370 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:14.302582 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:14.802350 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:15.302355 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:15.802132 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:15.926758 1119007 kubeadm.go:1113] duration metric: took 3.895467932s to wait for elevateKubeSystemPrivileges
	I0127 03:02:15.926808 1119007 kubeadm.go:394] duration metric: took 4m35.245756492s to StartCluster
	I0127 03:02:15.926834 1119007 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:15.926944 1119007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:15.928428 1119007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:15.928677 1119007 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:15.928795 1119007 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:15.928913 1119007 config.go:182] Loaded profile config "no-preload-887091": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:15.928932 1119007 addons.go:69] Setting metrics-server=true in profile "no-preload-887091"
	I0127 03:02:15.928966 1119007 addons.go:238] Setting addon metrics-server=true in "no-preload-887091"
	I0127 03:02:15.928977 1119007 addons.go:69] Setting dashboard=true in profile "no-preload-887091"
	W0127 03:02:15.928985 1119007 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:15.928991 1119007 addons.go:238] Setting addon dashboard=true in "no-preload-887091"
	I0127 03:02:15.928918 1119007 addons.go:69] Setting storage-provisioner=true in profile "no-preload-887091"
	I0127 03:02:15.929020 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.929025 1119007 addons.go:238] Setting addon storage-provisioner=true in "no-preload-887091"
	W0127 03:02:15.929036 1119007 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:15.928961 1119007 addons.go:69] Setting default-storageclass=true in profile "no-preload-887091"
	I0127 03:02:15.929073 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.929093 1119007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-887091"
	W0127 03:02:15.928999 1119007 addons.go:247] addon dashboard should already be in state true
	I0127 03:02:15.929175 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.929496 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929496 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929544 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929557 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.929547 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.929584 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.929499 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929692 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.930306 1119007 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:15.931877 1119007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:15.952533 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I0127 03:02:15.952549 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0127 03:02:15.952581 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46601
	I0127 03:02:15.952721 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
	I0127 03:02:15.954529 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.954547 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.954808 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.955205 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955229 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.955233 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955253 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.955313 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.955413 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955437 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.955766 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.955849 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.955886 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955947 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.956424 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.956463 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.956469 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.956507 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.956724 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.956927 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.957100 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.957708 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.957746 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.960884 1119007 addons.go:238] Setting addon default-storageclass=true in "no-preload-887091"
	W0127 03:02:15.960910 1119007 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:15.960960 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.961323 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.961366 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.977560 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
	I0127 03:02:15.978028 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.978173 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
	I0127 03:02:15.978517 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0127 03:02:15.978693 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.978872 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.978901 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.979226 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.979298 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.979562 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.979576 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.979593 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.979923 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.980113 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.980289 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.980304 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.980894 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.981251 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.981811 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:15.982385 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:15.983016 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:15.983162 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I0127 03:02:15.983756 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.983837 1119007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:15.984185 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.984202 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.984606 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.985117 1119007 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:15.985204 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.985237 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.985253 1119007 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:15.985273 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:15.985297 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:15.985367 1119007 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:15.986458 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:15.986480 1119007 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:15.986546 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:15.987599 1119007 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:15.988812 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.988933 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:15.989273 1119007 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:15.989471 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:15.989502 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.989571 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:15.989716 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:15.989884 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:15.990033 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:15.990172 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:15.990858 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.991445 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:15.991468 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.991628 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:15.991828 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:15.992248 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:15.992428 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:15.993703 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.994218 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:15.994244 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.994557 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:15.994742 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:15.994902 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:15.995042 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:16.004890 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I0127 03:02:16.005324 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:16.005841 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:16.005861 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:16.006249 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:16.006454 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:16.008475 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:16.008706 1119007 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:16.008719 1119007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:16.008733 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:16.011722 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:16.012561 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:16.012637 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:16.012663 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:16.012777 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:16.012973 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:16.013155 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:16.171165 1119007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:16.193562 1119007 node_ready.go:35] waiting up to 6m0s for node "no-preload-887091" to be "Ready" ...
	I0127 03:02:16.246946 1119007 node_ready.go:49] node "no-preload-887091" has status "Ready":"True"
	I0127 03:02:16.246978 1119007 node_ready.go:38] duration metric: took 53.383421ms for node "no-preload-887091" to be "Ready" ...
	I0127 03:02:16.246992 1119007 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:16.274293 1119007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:16.274621 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:16.274647 1119007 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:16.305232 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:16.327479 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:16.328118 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:16.328136 1119007 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:16.428329 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:16.428364 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:16.466201 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:16.466236 1119007 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:16.599271 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:16.599315 1119007 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:16.638608 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:16.638637 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:16.828108 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:16.828150 1119007 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:16.838645 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:16.838676 1119007 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:16.984773 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:16.984808 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:16.985269 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:16.985286 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:16.985295 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:16.985302 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:16.985629 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:16.985649 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.004424 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:17.004447 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:17.004789 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:17.004799 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:17.004830 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.011294 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:17.011605 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:17.011624 1119007 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:17.109457 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:17.109494 1119007 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:17.218037 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:17.218071 1119007 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:17.272264 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:17.272299 1119007 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:17.346698 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:17.903867 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.57633993s)
	I0127 03:02:17.903940 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:17.903958 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:17.904299 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:17.904382 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:17.904399 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.904412 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:17.904418 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:17.904680 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:17.904702 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.904715 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:18.291876 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.280535535s)
	I0127 03:02:18.291939 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:18.291962 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:18.292296 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:18.292315 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:18.292323 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:18.292329 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:18.293045 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:18.293120 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:18.293147 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:18.293165 1119007 addons.go:479] Verifying addon metrics-server=true in "no-preload-887091"
	I0127 03:02:18.308148 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:19.202588 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.855830221s)
	I0127 03:02:19.202668 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:19.202685 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:19.202996 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:19.203014 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:19.203031 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:19.203046 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:19.203365 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:19.203408 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:19.205207 1119007 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-887091 addons enable metrics-server
	
	I0127 03:02:19.206884 1119007 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 03:02:16.293451 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:18.793149 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:19.785753 1119263 pod_ready.go:82] duration metric: took 4m0.001003583s for pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:19.785781 1119263 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 03:02:19.785801 1119263 pod_ready.go:39] duration metric: took 4m12.565302655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:19.785832 1119263 kubeadm.go:597] duration metric: took 4m20.078127881s to restartPrimaryControlPlane
	W0127 03:02:19.785891 1119263 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:19.785918 1119263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:02:16.101837 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:18.600416 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:15.612007 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:15.612503 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:15.612532 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:15.612477 1121446 retry.go:31] will retry after 4.281984487s: waiting for domain to come up
	I0127 03:02:19.898517 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:19.899156 1121411 main.go:141] libmachine: (newest-cni-642127) found domain IP: 192.168.50.51
	I0127 03:02:19.899184 1121411 main.go:141] libmachine: (newest-cni-642127) reserving static IP address...
	I0127 03:02:19.899199 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has current primary IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:19.899706 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "newest-cni-642127", mac: "52:54:00:b2:c0:f5", ip: "192.168.50.51"} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:19.899748 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | skip adding static IP to network mk-newest-cni-642127 - found existing host DHCP lease matching {name: "newest-cni-642127", mac: "52:54:00:b2:c0:f5", ip: "192.168.50.51"}
	I0127 03:02:19.899765 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Getting to WaitForSSH function...
	I0127 03:02:19.899786 1121411 main.go:141] libmachine: (newest-cni-642127) reserved static IP address 192.168.50.51 for domain newest-cni-642127
	I0127 03:02:19.899794 1121411 main.go:141] libmachine: (newest-cni-642127) waiting for SSH...
	I0127 03:02:19.902680 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:19.903077 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:19.903108 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:19.903425 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Using SSH client type: external
	I0127 03:02:19.903455 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa (-rw-------)
	I0127 03:02:19.903497 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 03:02:19.903528 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | About to run SSH command:
	I0127 03:02:19.903545 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | exit 0
	I0127 03:02:20.033236 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | SSH cmd err, output: <nil>: 
	I0127 03:02:20.033650 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetConfigRaw
	I0127 03:02:20.034423 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
	I0127 03:02:20.037477 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.038000 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.038034 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.038292 1121411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/config.json ...
	I0127 03:02:20.038569 1121411 machine.go:93] provisionDockerMachine start ...
	I0127 03:02:20.038593 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:20.038817 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.041604 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.042029 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.042058 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.042374 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.042730 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.042972 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.043158 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.043362 1121411 main.go:141] libmachine: Using SSH client type: native
	I0127 03:02:20.043631 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0127 03:02:20.043646 1121411 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 03:02:20.162052 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 03:02:20.162088 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetMachineName
	I0127 03:02:20.162389 1121411 buildroot.go:166] provisioning hostname "newest-cni-642127"
	I0127 03:02:20.162416 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetMachineName
	I0127 03:02:20.162603 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.166195 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.166703 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.166735 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.167015 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.167255 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.167440 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.167629 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.167847 1121411 main.go:141] libmachine: Using SSH client type: native
	I0127 03:02:20.168082 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0127 03:02:20.168098 1121411 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-642127 && echo "newest-cni-642127" | sudo tee /etc/hostname
	I0127 03:02:19.208319 1119007 addons.go:514] duration metric: took 3.279531879s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 03:02:20.784826 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:20.304578 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-642127
	
	I0127 03:02:20.304614 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.307961 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.308494 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.308576 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.308725 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.308929 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.309194 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.309354 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.309604 1121411 main.go:141] libmachine: Using SSH client type: native
	I0127 03:02:20.309846 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0127 03:02:20.309865 1121411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-642127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-642127/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-642127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 03:02:20.431545 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:02:20.431586 1121411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-1057178/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-1057178/.minikube}
	I0127 03:02:20.431617 1121411 buildroot.go:174] setting up certificates
	I0127 03:02:20.431633 1121411 provision.go:84] configureAuth start
	I0127 03:02:20.431649 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetMachineName
	I0127 03:02:20.431999 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
	I0127 03:02:20.435425 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.435885 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.435918 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.436172 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.439389 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.439969 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.440002 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.440288 1121411 provision.go:143] copyHostCerts
	I0127 03:02:20.440368 1121411 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem, removing ...
	I0127 03:02:20.440392 1121411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem
	I0127 03:02:20.440475 1121411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem (1078 bytes)
	I0127 03:02:20.440610 1121411 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem, removing ...
	I0127 03:02:20.440672 1121411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem
	I0127 03:02:20.440724 1121411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem (1123 bytes)
	I0127 03:02:20.440826 1121411 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem, removing ...
	I0127 03:02:20.440838 1121411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem
	I0127 03:02:20.440872 1121411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem (1675 bytes)
	I0127 03:02:20.441000 1121411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem org=jenkins.newest-cni-642127 san=[127.0.0.1 192.168.50.51 localhost minikube newest-cni-642127]
	I0127 03:02:20.582957 1121411 provision.go:177] copyRemoteCerts
	I0127 03:02:20.583042 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 03:02:20.583082 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.586468 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.586937 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.586967 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.587297 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.587493 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.587653 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.587816 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:20.678286 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 03:02:20.710984 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 03:02:20.743521 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 03:02:20.776342 1121411 provision.go:87] duration metric: took 344.690364ms to configureAuth
	I0127 03:02:20.776390 1121411 buildroot.go:189] setting minikube options for container-runtime
	I0127 03:02:20.776645 1121411 config.go:182] Loaded profile config "newest-cni-642127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:20.776665 1121411 machine.go:96] duration metric: took 738.080097ms to provisionDockerMachine
	I0127 03:02:20.776676 1121411 start.go:293] postStartSetup for "newest-cni-642127" (driver="kvm2")
	I0127 03:02:20.776689 1121411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 03:02:20.776728 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:20.777166 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 03:02:20.777201 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.781262 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.781754 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.781782 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.782169 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.782409 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.782633 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.782886 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:20.877090 1121411 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 03:02:20.882893 1121411 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 03:02:20.882941 1121411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/addons for local assets ...
	I0127 03:02:20.883012 1121411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/files for local assets ...
	I0127 03:02:20.883121 1121411 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem -> 10644392.pem in /etc/ssl/certs
	I0127 03:02:20.883262 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 03:02:20.897501 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /etc/ssl/certs/10644392.pem (1708 bytes)
	I0127 03:02:20.927044 1121411 start.go:296] duration metric: took 150.330171ms for postStartSetup
	I0127 03:02:20.927103 1121411 fix.go:56] duration metric: took 20.579822967s for fixHost
	I0127 03:02:20.927133 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.930644 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.931093 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.931129 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.931414 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.931717 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.931919 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.932105 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.932280 1121411 main.go:141] libmachine: Using SSH client type: native
	I0127 03:02:20.932530 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0127 03:02:20.932545 1121411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 03:02:21.046461 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946941.010071066
	
	I0127 03:02:21.046493 1121411 fix.go:216] guest clock: 1737946941.010071066
	I0127 03:02:21.046504 1121411 fix.go:229] Guest: 2025-01-27 03:02:21.010071066 +0000 UTC Remote: 2025-01-27 03:02:20.927108919 +0000 UTC m=+20.729857739 (delta=82.962147ms)
	I0127 03:02:21.046536 1121411 fix.go:200] guest clock delta is within tolerance: 82.962147ms
	I0127 03:02:21.046543 1121411 start.go:83] releasing machines lock for "newest-cni-642127", held for 20.699275534s
	I0127 03:02:21.046580 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:21.046929 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
	I0127 03:02:21.050101 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.050549 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:21.050572 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.050930 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:21.051682 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:21.051910 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:21.052040 1121411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 03:02:21.052128 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:21.052184 1121411 ssh_runner.go:195] Run: cat /version.json
	I0127 03:02:21.052219 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:21.055762 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.055836 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.056356 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:21.056389 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.056429 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:21.056447 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.056720 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:21.056899 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:21.056974 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:21.057099 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:21.057147 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:21.057303 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:21.057708 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:21.057902 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:21.169709 1121411 ssh_runner.go:195] Run: systemctl --version
	I0127 03:02:21.177622 1121411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 03:02:21.184029 1121411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 03:02:21.184112 1121411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 03:02:21.202861 1121411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 03:02:21.202890 1121411 start.go:495] detecting cgroup driver to use...
	I0127 03:02:21.202967 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 03:02:21.236110 1121411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 03:02:21.250683 1121411 docker.go:217] disabling cri-docker service (if available) ...
	I0127 03:02:21.250796 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 03:02:21.266354 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 03:02:21.284146 1121411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 03:02:21.436406 1121411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 03:02:21.620560 1121411 docker.go:233] disabling docker service ...
	I0127 03:02:21.620655 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 03:02:21.639534 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 03:02:21.657179 1121411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 03:02:21.828676 1121411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 03:02:21.993891 1121411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 03:02:22.011124 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 03:02:22.037734 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 03:02:22.049863 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 03:02:22.064327 1121411 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 03:02:22.064427 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 03:02:22.080328 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 03:02:22.093806 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 03:02:22.106165 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 03:02:22.117782 1121411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 03:02:22.129650 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 03:02:22.152872 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 03:02:22.165020 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 03:02:22.177918 1121411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 03:02:22.188259 1121411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 03:02:22.188355 1121411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 03:02:22.204350 1121411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 03:02:22.218093 1121411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:22.356619 1121411 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 03:02:22.385087 1121411 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 03:02:22.385172 1121411 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 03:02:22.389980 1121411 retry.go:31] will retry after 758.524819ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 03:02:23.148722 1121411 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 03:02:23.154533 1121411 start.go:563] Will wait 60s for crictl version
	I0127 03:02:23.154611 1121411 ssh_runner.go:195] Run: which crictl
	I0127 03:02:23.159040 1121411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 03:02:23.200478 1121411 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 03:02:23.200579 1121411 ssh_runner.go:195] Run: containerd --version
	I0127 03:02:23.228424 1121411 ssh_runner.go:195] Run: containerd --version
	I0127 03:02:23.265392 1121411 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 03:02:23.266856 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
	I0127 03:02:23.269741 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:23.270196 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:23.270231 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:23.270441 1121411 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 03:02:23.275461 1121411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:02:23.294081 1121411 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 03:02:21.866190 1119263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.080241643s)
	I0127 03:02:21.866293 1119263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:21.886667 1119263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:21.901554 1119263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:21.915270 1119263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:21.915296 1119263 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:21.915369 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:21.929169 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:21.929294 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:21.942913 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:21.956444 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:21.956522 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:21.970342 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:21.989145 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:21.989232 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:22.001913 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:22.013198 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:22.013270 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:22.026131 1119263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:22.226370 1119263 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:20.601947 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:22.605621 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:23.295574 1121411 kubeadm.go:883] updating cluster {Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 03:02:23.295756 1121411 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 03:02:23.295841 1121411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:02:23.331579 1121411 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 03:02:23.331604 1121411 containerd.go:534] Images already preloaded, skipping extraction
	I0127 03:02:23.331661 1121411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:02:23.368818 1121411 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 03:02:23.368848 1121411 cache_images.go:84] Images are preloaded, skipping loading
	I0127 03:02:23.368856 1121411 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.32.1 containerd true true} ...
	I0127 03:02:23.369012 1121411 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-642127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 03:02:23.369101 1121411 ssh_runner.go:195] Run: sudo crictl info
	I0127 03:02:23.405913 1121411 cni.go:84] Creating CNI manager for ""
	I0127 03:02:23.405949 1121411 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:23.405966 1121411 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 03:02:23.406015 1121411 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-642127 NodeName:newest-cni-642127 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 03:02:23.406210 1121411 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-642127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.51"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 03:02:23.406291 1121411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 03:02:23.418253 1121411 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:02:23.418339 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:02:23.431397 1121411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0127 03:02:23.452908 1121411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:02:23.474059 1121411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0127 03:02:23.494976 1121411 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0127 03:02:23.499246 1121411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:02:23.512541 1121411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:23.648564 1121411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:23.667204 1121411 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127 for IP: 192.168.50.51
	I0127 03:02:23.667230 1121411 certs.go:194] generating shared ca certs ...
	I0127 03:02:23.667265 1121411 certs.go:226] acquiring lock for ca certs: {Name:mk567acc23cbe907605c03a2ec03c8e4859e8343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:23.667447 1121411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key
	I0127 03:02:23.667526 1121411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key
	I0127 03:02:23.667540 1121411 certs.go:256] generating profile certs ...
	I0127 03:02:23.667681 1121411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/client.key
	I0127 03:02:23.667777 1121411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/apiserver.key.fe27a200
	I0127 03:02:23.667863 1121411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/proxy-client.key
	I0127 03:02:23.668017 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem (1338 bytes)
	W0127 03:02:23.668071 1121411 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439_empty.pem, impossibly tiny 0 bytes
	I0127 03:02:23.668085 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 03:02:23.668115 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:02:23.668143 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:02:23.668177 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem (1675 bytes)
	I0127 03:02:23.668261 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem (1708 bytes)
	I0127 03:02:23.669195 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:02:23.715219 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 03:02:23.757555 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:02:23.797303 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 03:02:23.839764 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 03:02:23.889721 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 03:02:23.923393 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:02:23.953947 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 03:02:23.983760 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:02:24.016899 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem --> /usr/share/ca-certificates/1064439.pem (1338 bytes)
	I0127 03:02:24.060186 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /usr/share/ca-certificates/10644392.pem (1708 bytes)
	I0127 03:02:24.099215 1121411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:02:24.120841 1121411 ssh_runner.go:195] Run: openssl version
	I0127 03:02:24.127163 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:02:24.139725 1121411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:02:24.144911 1121411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:43 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:02:24.145000 1121411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:02:24.153545 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:02:24.167817 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1064439.pem && ln -fs /usr/share/ca-certificates/1064439.pem /etc/ssl/certs/1064439.pem"
	I0127 03:02:24.182019 1121411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1064439.pem
	I0127 03:02:24.188811 1121411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:51 /usr/share/ca-certificates/1064439.pem
	I0127 03:02:24.188883 1121411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1064439.pem
	I0127 03:02:24.196999 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1064439.pem /etc/ssl/certs/51391683.0"
	I0127 03:02:24.209518 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10644392.pem && ln -fs /usr/share/ca-certificates/10644392.pem /etc/ssl/certs/10644392.pem"
	I0127 03:02:24.221497 1121411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10644392.pem
	I0127 03:02:24.226538 1121411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:51 /usr/share/ca-certificates/10644392.pem
	I0127 03:02:24.226618 1121411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10644392.pem
	I0127 03:02:24.233572 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10644392.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 03:02:24.245296 1121411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 03:02:24.250242 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 03:02:24.256818 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 03:02:24.264939 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 03:02:24.272818 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 03:02:24.280734 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 03:02:24.289169 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 03:02:24.296827 1121411 kubeadm.go:392] StartCluster: {Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: S
ubnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:02:24.297003 1121411 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 03:02:24.297095 1121411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:02:24.345692 1121411 cri.go:89] found id: "a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c"
	I0127 03:02:24.345721 1121411 cri.go:89] found id: "2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba"
	I0127 03:02:24.345726 1121411 cri.go:89] found id: "1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f"
	I0127 03:02:24.345731 1121411 cri.go:89] found id: "7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc"
	I0127 03:02:24.345736 1121411 cri.go:89] found id: "f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71"
	I0127 03:02:24.345741 1121411 cri.go:89] found id: "6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19"
	I0127 03:02:24.345745 1121411 cri.go:89] found id: "22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a"
	I0127 03:02:24.345749 1121411 cri.go:89] found id: "6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3"
	I0127 03:02:24.345753 1121411 cri.go:89] found id: ""
	I0127 03:02:24.345806 1121411 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 03:02:24.363134 1121411 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T03:02:24Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 03:02:24.363233 1121411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:02:24.377414 1121411 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 03:02:24.377441 1121411 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 03:02:24.377512 1121411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 03:02:24.391116 1121411 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 03:02:24.392658 1121411 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-642127" does not appear in /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:24.393662 1121411 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-1057178/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-642127" cluster setting kubeconfig missing "newest-cni-642127" context setting]
	I0127 03:02:24.395074 1121411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:24.406122 1121411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 03:02:24.417412 1121411 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0127 03:02:24.417457 1121411 kubeadm.go:1160] stopping kube-system containers ...
	I0127 03:02:24.417475 1121411 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 03:02:24.417545 1121411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:02:24.459011 1121411 cri.go:89] found id: "a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c"
	I0127 03:02:24.459043 1121411 cri.go:89] found id: "2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba"
	I0127 03:02:24.459049 1121411 cri.go:89] found id: "1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f"
	I0127 03:02:24.459055 1121411 cri.go:89] found id: "7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc"
	I0127 03:02:24.459059 1121411 cri.go:89] found id: "f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71"
	I0127 03:02:24.459065 1121411 cri.go:89] found id: "6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19"
	I0127 03:02:24.459069 1121411 cri.go:89] found id: "22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a"
	I0127 03:02:24.459074 1121411 cri.go:89] found id: "6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3"
	I0127 03:02:24.459079 1121411 cri.go:89] found id: ""
	I0127 03:02:24.459085 1121411 cri.go:252] Stopping containers: [a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c 2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba 1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f 7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71 6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19 22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a 6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3]
	I0127 03:02:24.459142 1121411 ssh_runner.go:195] Run: which crictl
	I0127 03:02:24.463700 1121411 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c 2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba 1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f 7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71 6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19 22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a 6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3
	I0127 03:02:24.514136 1121411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 03:02:24.533173 1121411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:24.546127 1121411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:24.546153 1121411 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:24.546208 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:24.557350 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:24.557425 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:24.568241 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:24.579187 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:24.579283 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:24.590554 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:24.603551 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:24.603617 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:24.617395 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:24.630452 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:24.630532 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:24.642268 1121411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:24.652281 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:24.829811 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:23.282142 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:25.286311 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.286348 1119007 pod_ready.go:82] duration metric: took 9.012019717s for pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.286363 1119007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.296155 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.296266 1119007 pod_ready.go:82] duration metric: took 9.891475ms for pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.296304 1119007 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.306424 1119007 pod_ready.go:93] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.306520 1119007 pod_ready.go:82] duration metric: took 10.178061ms for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.306550 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.316320 1119007 pod_ready.go:93] pod "kube-apiserver-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.316353 1119007 pod_ready.go:82] duration metric: took 9.779811ms for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.316368 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.324972 1119007 pod_ready.go:93] pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.324998 1119007 pod_ready.go:82] duration metric: took 8.620263ms for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.325011 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-45pz6" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.682761 1119007 pod_ready.go:93] pod "kube-proxy-45pz6" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.682792 1119007 pod_ready.go:82] duration metric: took 357.773408ms for pod "kube-proxy-45pz6" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.682807 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:26.086323 1119007 pod_ready.go:93] pod "kube-scheduler-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:26.086365 1119007 pod_ready.go:82] duration metric: took 403.548355ms for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:26.086378 1119007 pod_ready.go:39] duration metric: took 9.839373235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:26.086398 1119007 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:26.086493 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:26.115441 1119007 api_server.go:72] duration metric: took 10.186729821s to wait for apiserver process to appear ...
	I0127 03:02:26.115474 1119007 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:26.115503 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0127 03:02:26.125822 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0127 03:02:26.127247 1119007 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:26.127277 1119007 api_server.go:131] duration metric: took 11.792506ms to wait for apiserver health ...
	I0127 03:02:26.127289 1119007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:26.285021 1119007 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:26.285059 1119007 system_pods.go:61] "coredns-668d6bf9bc-86j6q" [9b85ae79-ae19-4cd1-a0da-0343c9e2801c] Running
	I0127 03:02:26.285067 1119007 system_pods.go:61] "coredns-668d6bf9bc-fk8cw" [c7075b92-233d-4a5a-b864-ef349d7125e7] Running
	I0127 03:02:26.285073 1119007 system_pods.go:61] "etcd-no-preload-887091" [45d4a5fc-797f-4d4a-9204-049ebcdc5647] Running
	I0127 03:02:26.285079 1119007 system_pods.go:61] "kube-apiserver-no-preload-887091" [11e7ea14-678a-408f-a722-8fedb984c086] Running
	I0127 03:02:26.285085 1119007 system_pods.go:61] "kube-controller-manager-no-preload-887091" [95d63381-33aa-428b-80b1-6e8ccf96b8a1] Running
	I0127 03:02:26.285089 1119007 system_pods.go:61] "kube-proxy-45pz6" [b3aa986f-d6d8-4050-8760-438aabd39bdc] Running
	I0127 03:02:26.285094 1119007 system_pods.go:61] "kube-scheduler-no-preload-887091" [5065d24f-256d-43ad-bd00-1d5868b7214d] Running
	I0127 03:02:26.285104 1119007 system_pods.go:61] "metrics-server-f79f97bbb-vshg4" [33ae36ed-d8a4-4d60-bcd0-1becf2d490bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:26.285110 1119007 system_pods.go:61] "storage-provisioner" [127a1f13-b70c-4482-bd8b-14a6bf24b663] Running
	I0127 03:02:26.285121 1119007 system_pods.go:74] duration metric: took 157.824017ms to wait for pod list to return data ...
	I0127 03:02:26.285134 1119007 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:02:26.480092 1119007 default_sa.go:45] found service account: "default"
	I0127 03:02:26.480128 1119007 default_sa.go:55] duration metric: took 194.984911ms for default service account to be created ...
	I0127 03:02:26.480141 1119007 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:02:26.688727 1119007 system_pods.go:87] 9 kube-system pods found
	I0127 03:02:25.099839 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:27.100451 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:29.599652 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:26.158504 1121411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.328648156s)
	I0127 03:02:26.158550 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:26.404894 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:26.526530 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:26.667432 1121411 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:26.667635 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:27.167965 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:27.667769 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:27.702851 1121411 api_server.go:72] duration metric: took 1.03541528s to wait for apiserver process to appear ...
	I0127 03:02:27.702957 1121411 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:27.702996 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:27.703762 1121411 api_server.go:269] stopped: https://192.168.50.51:8443/healthz: Get "https://192.168.50.51:8443/healthz": dial tcp 192.168.50.51:8443: connect: connection refused
	I0127 03:02:28.203377 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:28.204135 1121411 api_server.go:269] stopped: https://192.168.50.51:8443/healthz: Get "https://192.168.50.51:8443/healthz": dial tcp 192.168.50.51:8443: connect: connection refused
	I0127 03:02:28.703884 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:32.408333 1119263 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:32.408420 1119263 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:32.408564 1119263 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:32.408723 1119263 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:32.408850 1119263 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:32.408936 1119263 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:32.410600 1119263 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:32.410694 1119263 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:32.410784 1119263 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:32.410899 1119263 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:32.410985 1119263 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:32.411061 1119263 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:32.411144 1119263 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:32.411243 1119263 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:32.411349 1119263 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:32.411474 1119263 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:32.411592 1119263 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:32.411654 1119263 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:32.411755 1119263 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:32.411823 1119263 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:32.411900 1119263 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:32.411957 1119263 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:32.412019 1119263 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:32.412077 1119263 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:32.412166 1119263 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:32.412460 1119263 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:32.415088 1119263 out.go:235]   - Booting up control plane ...
	I0127 03:02:32.415215 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:32.415349 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:32.415444 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:32.415597 1119263 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:32.415722 1119263 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:32.415772 1119263 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:32.415934 1119263 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:32.416041 1119263 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:32.416113 1119263 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001709036s
	I0127 03:02:32.416228 1119263 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:32.416326 1119263 kubeadm.go:310] [api-check] The API server is healthy after 6.003070171s
	I0127 03:02:32.416466 1119263 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:32.416619 1119263 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:32.416691 1119263 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:32.416890 1119263 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-264552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:32.416990 1119263 kubeadm.go:310] [bootstrap-token] Using token: glfh41.djplehex31d2nmyn
	I0127 03:02:32.418322 1119263 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:32.418468 1119263 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:32.418553 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:32.418749 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:32.418932 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:32.419089 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:32.419214 1119263 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:32.419378 1119263 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:32.419436 1119263 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:32.419498 1119263 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:32.419505 1119263 kubeadm.go:310] 
	I0127 03:02:32.419581 1119263 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:32.419587 1119263 kubeadm.go:310] 
	I0127 03:02:32.419686 1119263 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:32.419696 1119263 kubeadm.go:310] 
	I0127 03:02:32.419729 1119263 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:32.419809 1119263 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:32.419880 1119263 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:32.419891 1119263 kubeadm.go:310] 
	I0127 03:02:32.419987 1119263 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:32.419998 1119263 kubeadm.go:310] 
	I0127 03:02:32.420067 1119263 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:32.420078 1119263 kubeadm.go:310] 
	I0127 03:02:32.420143 1119263 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:32.420236 1119263 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:32.420319 1119263 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:32.420330 1119263 kubeadm.go:310] 
	I0127 03:02:32.420421 1119263 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:32.420508 1119263 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:32.420519 1119263 kubeadm.go:310] 
	I0127 03:02:32.420616 1119263 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token glfh41.djplehex31d2nmyn \
	I0127 03:02:32.420750 1119263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
	I0127 03:02:32.420781 1119263 kubeadm.go:310] 	--control-plane 
	I0127 03:02:32.420790 1119263 kubeadm.go:310] 
	I0127 03:02:32.420891 1119263 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:32.420902 1119263 kubeadm.go:310] 
	I0127 03:02:32.421036 1119263 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token glfh41.djplehex31d2nmyn \
	I0127 03:02:32.421192 1119263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba 
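The --discovery-token-ca-cert-hash printed in the kubeadm join commands above is the SHA-256 of the cluster CA's public key, taken over its DER-encoded SubjectPublicKeyInfo. A short Go sketch that reproduces such a hash from a CA certificate follows; the /var/lib/minikube/certs/ca.crt path mirrors the certificateDir shown earlier in this output but is only an assumption for illustration.

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Path follows the certificateDir logged by kubeadm above; adjust as needed.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA's public key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}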
	I0127 03:02:32.421210 1119263 cni.go:84] Creating CNI manager for ""
	I0127 03:02:32.421220 1119263 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:32.422542 1119263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:30.820769 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 03:02:30.820809 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 03:02:30.820827 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:30.840404 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 03:02:30.840436 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 03:02:31.203948 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:31.209795 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:02:31.209820 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:02:31.703217 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:31.724822 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:02:31.724862 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:02:32.203446 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:32.210068 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:02:32.210100 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:02:32.703717 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:32.709016 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 200:
	ok
	I0127 03:02:32.719003 1121411 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:32.719041 1121411 api_server.go:131] duration metric: took 5.016063652s to wait for apiserver health ...
	I0127 03:02:32.719055 1121411 cni.go:84] Creating CNI manager for ""
	I0127 03:02:32.719065 1121411 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:32.721101 1121411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:32.722433 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:32.734857 1121411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
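The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration referenced by the "Configuring bridge CNI" steps. The sketch below writes a representative bridge + host-local IPAM conflist from Go; the exact fields and pod subnet minikube generates may differ (the log only records the file's size), so treat the JSON content as illustrative.

	package main

	import "os"

	// Representative bridge CNI conflist with host-local IPAM and portmap;
	// field values (notably the subnet) are assumptions, not minikube's exact output.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}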
	I0127 03:02:32.761120 1121411 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:32.778500 1121411 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:32.778547 1121411 system_pods.go:61] "coredns-668d6bf9bc-dscrm" [2869a26b-4522-43cd-8417-abc17b77dc7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:02:32.778558 1121411 system_pods.go:61] "coredns-668d6bf9bc-rcdv8" [7697dd25-c99a-4413-a242-54cca1d1e5e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:02:32.778571 1121411 system_pods.go:61] "etcd-newest-cni-642127" [816ba553-68cb-4496-8dba-7839e9799916] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 03:02:32.778583 1121411 system_pods.go:61] "kube-apiserver-newest-cni-642127" [69c55a7c-148b-40ff-86ec-739c5f668a11] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 03:02:32.778596 1121411 system_pods.go:61] "kube-controller-manager-newest-cni-642127" [1b6d1085-c4fe-43f2-a9ab-320adeb6cd38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 03:02:32.778608 1121411 system_pods.go:61] "kube-proxy-5q7mp" [1efd4424-3475-45e1-b80b-c941de90e34d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 03:02:32.778620 1121411 system_pods.go:61] "kube-scheduler-newest-cni-642127" [f1c81d74-2818-4093-9e69-19359cc3ff50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 03:02:32.778631 1121411 system_pods.go:61] "metrics-server-f79f97bbb-47hqq" [7f6ccb13-e73f-4514-a639-e1297b545cf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:32.778642 1121411 system_pods.go:61] "storage-provisioner" [ee06d1e8-0ae7-42c7-9c5b-d19fcfb83f40] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 03:02:32.778653 1121411 system_pods.go:74] duration metric: took 17.501517ms to wait for pod list to return data ...
	I0127 03:02:32.778667 1121411 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:02:32.783164 1121411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:02:32.783201 1121411 node_conditions.go:123] node cpu capacity is 2
	I0127 03:02:32.783216 1121411 node_conditions.go:105] duration metric: took 4.539816ms to run NodePressure ...
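The NodePressure verification above reads each node's reported capacity (cpu, ephemeral storage) and its pressure conditions from the API. A compact client-go sketch of the same kind of check, using the kubeconfig path that appears elsewhere in this log; this is an illustration under those assumptions, not minikube's own code.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from this report; substitute your own.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20316-1057178/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
				n.Name,
				n.Status.Capacity.Cpu().String(),
				n.Status.Capacity.StorageEphemeral().String())
			for _, c := range n.Status.Conditions {
				// Pressure conditions (MemoryPressure, DiskPressure, PIDPressure) should be False on a healthy node.
				if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Printf("  condition %s is True: %s\n", c.Type, c.Message)
				}
			}
		}
	}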
	I0127 03:02:32.783239 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:33.135340 1121411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:33.148690 1121411 ops.go:34] apiserver oom_adj: -16
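The oom_adj probe above (cat /proc/$(pgrep kube-apiserver)/oom_adj) confirms the apiserver runs with a strongly negative OOM score adjustment, so the kernel OOM killer is unlikely to target it. A small Go sketch of the same probe over /proc, assuming it runs on the node itself; purely illustrative.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		entries, err := os.ReadDir("/proc")
		if err != nil {
			panic(err)
		}
		for _, e := range entries {
			if !e.IsDir() {
				continue
			}
			// Match the process by its comm name, like pgrep -x kube-apiserver.
			comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
			if err != nil || strings.TrimSpace(string(comm)) != "kube-apiserver" {
				continue
			}
			adj, err := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
			if err != nil {
				continue
			}
			fmt.Printf("kube-apiserver pid %s oom_adj %s", e.Name(), adj)
		}
	}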
	I0127 03:02:33.148723 1121411 kubeadm.go:597] duration metric: took 8.771274475s to restartPrimaryControlPlane
	I0127 03:02:33.148739 1121411 kubeadm.go:394] duration metric: took 8.851928105s to StartCluster
	I0127 03:02:33.148766 1121411 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:33.148862 1121411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:33.150733 1121411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:33.150984 1121411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:33.151079 1121411 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:33.151202 1121411 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-642127"
	I0127 03:02:33.151222 1121411 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-642127"
	W0127 03:02:33.151238 1121411 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:33.151257 1121411 addons.go:69] Setting metrics-server=true in profile "newest-cni-642127"
	I0127 03:02:33.151258 1121411 addons.go:69] Setting default-storageclass=true in profile "newest-cni-642127"
	I0127 03:02:33.151284 1121411 addons.go:238] Setting addon metrics-server=true in "newest-cni-642127"
	I0127 03:02:33.151272 1121411 addons.go:69] Setting dashboard=true in profile "newest-cni-642127"
	W0127 03:02:33.151294 1121411 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:33.151294 1121411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-642127"
	I0127 03:02:33.151315 1121411 addons.go:238] Setting addon dashboard=true in "newest-cni-642127"
	I0127 03:02:33.151313 1121411 config.go:182] Loaded profile config "newest-cni-642127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	W0127 03:02:33.151325 1121411 addons.go:247] addon dashboard should already be in state true
	I0127 03:02:33.151330 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
	I0127 03:02:33.151355 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
	I0127 03:02:33.151285 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
	I0127 03:02:33.151717 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.151747 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.151754 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.151760 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.151789 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.151793 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.151825 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.151865 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.152612 1121411 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:33.154050 1121411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:33.169429 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
	I0127 03:02:33.169982 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.170451 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.170472 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.170815 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.171371 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36587
	I0127 03:02:33.171487 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.171528 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.171747 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0127 03:02:33.171942 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.172289 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.172471 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.172498 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.172746 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.172766 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.172908 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.174172 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.174237 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.175157 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0127 03:02:33.175572 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.175616 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.175822 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.177792 1121411 addons.go:238] Setting addon default-storageclass=true in "newest-cni-642127"
	W0127 03:02:33.177817 1121411 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:33.177848 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
	I0127 03:02:33.178206 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.178256 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.178862 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.178892 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.179421 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.192581 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I0127 03:02:33.193097 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.193643 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.193668 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.194026 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.194248 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.197497 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:33.199029 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
	I0127 03:02:33.199688 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.199789 1121411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:33.200189 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.200217 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.200630 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.200826 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.201177 1121411 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:33.201196 1121411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:33.201215 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:33.201773 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.201821 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.203099 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:33.204646 1121411 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:33.205709 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.206717 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:33.206782 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.207074 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:33.207272 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:33.207453 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:33.207613 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:33.208044 1121411 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:33.209101 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:33.209120 1121411 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:33.209140 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:33.212709 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.213133 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:33.213153 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.213451 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:33.213632 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:33.213734 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:33.213819 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:33.219861 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43957
	I0127 03:02:33.220403 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.220991 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.221024 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.221408 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.222196 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.222254 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.223731 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0127 03:02:33.224051 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.224552 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.224573 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.224816 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.225077 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.227906 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:33.229635 1121411 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:32.423722 1119263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:32.436568 1119263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:32.461950 1119263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:32.462072 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:32.462109 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-264552 minikube.k8s.io/updated_at=2025_01_27T03_02_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=embed-certs-264552 minikube.k8s.io/primary=true
	I0127 03:02:32.477721 1119263 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:32.739220 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:33.239786 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:33.740039 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:34.239291 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:34.740312 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:31.600099 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:33.600177 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:33.231071 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:33.231090 1121411 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:33.231112 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:33.233979 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.234359 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:33.234412 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.234633 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:33.234777 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:33.234927 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:33.235147 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:33.243914 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40181
	I0127 03:02:33.244332 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.244875 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.244889 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.245272 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.245443 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.247204 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:33.247418 1121411 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:33.247429 1121411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:33.247455 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:33.250553 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.251030 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:33.251045 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.251208 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:33.251359 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:33.251505 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:33.251611 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:33.375505 1121411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:33.394405 1121411 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:33.394507 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:33.410947 1121411 api_server.go:72] duration metric: took 259.928237ms to wait for apiserver process to appear ...
	I0127 03:02:33.410983 1121411 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:33.411005 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:33.416758 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 200:
	ok
	I0127 03:02:33.418367 1121411 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:33.418395 1121411 api_server.go:131] duration metric: took 7.402525ms to wait for apiserver health ...
	I0127 03:02:33.418407 1121411 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:33.424893 1121411 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:33.424921 1121411 system_pods.go:61] "coredns-668d6bf9bc-dscrm" [2869a26b-4522-43cd-8417-abc17b77dc7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:02:33.424928 1121411 system_pods.go:61] "coredns-668d6bf9bc-rcdv8" [7697dd25-c99a-4413-a242-54cca1d1e5e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:02:33.424936 1121411 system_pods.go:61] "etcd-newest-cni-642127" [816ba553-68cb-4496-8dba-7839e9799916] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 03:02:33.424965 1121411 system_pods.go:61] "kube-apiserver-newest-cni-642127" [69c55a7c-148b-40ff-86ec-739c5f668a11] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 03:02:33.424984 1121411 system_pods.go:61] "kube-controller-manager-newest-cni-642127" [1b6d1085-c4fe-43f2-a9ab-320adeb6cd38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 03:02:33.424992 1121411 system_pods.go:61] "kube-proxy-5q7mp" [1efd4424-3475-45e1-b80b-c941de90e34d] Running
	I0127 03:02:33.424997 1121411 system_pods.go:61] "kube-scheduler-newest-cni-642127" [f1c81d74-2818-4093-9e69-19359cc3ff50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 03:02:33.425005 1121411 system_pods.go:61] "metrics-server-f79f97bbb-47hqq" [7f6ccb13-e73f-4514-a639-e1297b545cf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:33.425009 1121411 system_pods.go:61] "storage-provisioner" [ee06d1e8-0ae7-42c7-9c5b-d19fcfb83f40] Running
	I0127 03:02:33.425017 1121411 system_pods.go:74] duration metric: took 6.604015ms to wait for pod list to return data ...
	I0127 03:02:33.425027 1121411 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:02:33.427992 1121411 default_sa.go:45] found service account: "default"
	I0127 03:02:33.428016 1121411 default_sa.go:55] duration metric: took 2.981475ms for default service account to be created ...
	I0127 03:02:33.428030 1121411 kubeadm.go:582] duration metric: took 277.019922ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 03:02:33.428053 1121411 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:02:33.431283 1121411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:02:33.431303 1121411 node_conditions.go:123] node cpu capacity is 2
	I0127 03:02:33.431313 1121411 node_conditions.go:105] duration metric: took 3.254985ms to run NodePressure ...
	I0127 03:02:33.431324 1121411 start.go:241] waiting for startup goroutines ...
	I0127 03:02:33.462238 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:33.462261 1121411 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:33.476129 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:33.476162 1121411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:33.488754 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:33.488789 1121411 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:33.497073 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:33.519522 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:33.519557 1121411 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:33.551868 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:33.551905 1121411 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:33.565343 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:33.565374 1121411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:33.600695 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:33.600720 1121411 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:33.602150 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:33.632660 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:33.632694 1121411 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:33.652690 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:33.705754 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:33.705786 1121411 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:33.793208 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:33.793261 1121411 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:33.881849 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:33.881884 1121411 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:33.979510 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:33.979542 1121411 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:34.040605 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:34.040637 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:34.041032 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:34.041080 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:34.041090 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:34.041113 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:34.041137 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:34.041431 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:34.041481 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:34.041493 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:34.058399 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:34.104645 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:34.104666 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:34.104999 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:34.105025 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:34.105046 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:35.194812 1121411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.542086223s)
	I0127 03:02:35.194884 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.194899 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.194665 1121411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.592471736s)
	I0127 03:02:35.194995 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.195010 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.197298 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.197320 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.197331 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.197338 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.197484 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:35.197524 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.197543 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.197551 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.197563 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.197565 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.197575 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.197591 1121411 addons.go:479] Verifying addon metrics-server=true in "newest-cni-642127"
	I0127 03:02:35.197806 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.197821 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.738350 1121411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.679893698s)
	I0127 03:02:35.738414 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.738431 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.738859 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.738880 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.738897 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.738906 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.739194 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.739211 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.739256 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:35.740543 1121411 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-642127 addons enable metrics-server
	
	I0127 03:02:35.742112 1121411 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0127 03:02:35.743312 1121411 addons.go:514] duration metric: took 2.592255359s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0127 03:02:35.743356 1121411 start.go:246] waiting for cluster config update ...
	I0127 03:02:35.743372 1121411 start.go:255] writing updated cluster config ...
	I0127 03:02:35.743643 1121411 ssh_runner.go:195] Run: rm -f paused
	I0127 03:02:35.802583 1121411 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 03:02:35.804271 1121411 out.go:177] * Done! kubectl is now configured to use "newest-cni-642127" cluster and "default" namespace by default
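	[editor's note] The addon bring-up logged above follows one repeated pattern: each manifest is streamed onto the node (the "scp ... -->" lines) and the whole group is then applied with the cluster's own kubectl binary. A condensed sketch of that sequence, using only paths and arguments that appear in the log (illustrative, not part of the captured output):
	
		# manifests are first copied to /etc/kubernetes/addons/ on the VM over SSH,
		# then applied in one batch with the kubeconfig that lives on the node
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.32.1/kubectl apply \
		  -f /etc/kubernetes/addons/metrics-apiservice.yaml \
		  -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
		  -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
		  -f /etc/kubernetes/addons/metrics-server-service.yaml
	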
	I0127 03:02:35.240046 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:35.739577 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:36.239666 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:36.396540 1119263 kubeadm.go:1113] duration metric: took 3.934543669s to wait for elevateKubeSystemPrivileges
	I0127 03:02:36.396587 1119263 kubeadm.go:394] duration metric: took 4m36.765414047s to StartCluster
	I0127 03:02:36.396612 1119263 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:36.396700 1119263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:36.399283 1119263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:36.399607 1119263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:36.399896 1119263 config.go:182] Loaded profile config "embed-certs-264552": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:36.399967 1119263 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:36.400065 1119263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-264552"
	I0127 03:02:36.400097 1119263 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-264552"
	W0127 03:02:36.400111 1119263 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:36.400147 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.400364 1119263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-264552"
	I0127 03:02:36.400393 1119263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-264552"
	I0127 03:02:36.400697 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.400746 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.400860 1119263 addons.go:69] Setting dashboard=true in profile "embed-certs-264552"
	I0127 03:02:36.400889 1119263 addons.go:238] Setting addon dashboard=true in "embed-certs-264552"
	I0127 03:02:36.400891 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	W0127 03:02:36.400899 1119263 addons.go:247] addon dashboard should already be in state true
	I0127 03:02:36.400934 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.400962 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.401007 1119263 addons.go:69] Setting metrics-server=true in profile "embed-certs-264552"
	I0127 03:02:36.401034 1119263 addons.go:238] Setting addon metrics-server=true in "embed-certs-264552"
	W0127 03:02:36.401044 1119263 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:36.401078 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.401508 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.401557 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.401777 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.401824 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.401991 1119263 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:36.403910 1119263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:36.422683 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0127 03:02:36.423177 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.423824 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.423851 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.424298 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.424516 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.425635 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0127 03:02:36.425994 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0127 03:02:36.426142 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.426423 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.426703 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.426729 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.427088 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.427111 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.427288 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.427869 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.427910 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.429980 1119263 addons.go:238] Setting addon default-storageclass=true in "embed-certs-264552"
	W0127 03:02:36.429999 1119263 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:36.430029 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.430409 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.430443 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.430902 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.431582 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.431620 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.449634 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0127 03:02:36.450301 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.451062 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.451085 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.451525 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.452191 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.452239 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.455086 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40299
	I0127 03:02:36.455301 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41491
	I0127 03:02:36.455535 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.456246 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.456264 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.456672 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.456898 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.458545 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0127 03:02:36.459300 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.459602 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.460164 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.460195 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.461041 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.461379 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.461672 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.461676 1119263 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:36.461723 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.461915 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.461930 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.462520 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.462923 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.465082 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.465338 1119263 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:36.466448 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:36.466474 1119263 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:36.466495 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.466570 1119263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:36.468155 1119263 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:36.468187 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:36.468209 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.470910 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.471779 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.471818 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.472039 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.472253 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.472399 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.472572 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.475423 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34703
	I0127 03:02:36.476153 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.476804 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.476823 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.477245 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.477505 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.479472 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.481333 1119263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:36.481739 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0127 03:02:36.482275 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.482837 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.482854 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.482868 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:36.482887 1119263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:36.482910 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.483231 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.483493 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.486181 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.486454 1119263 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:36.486475 1119263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:36.486493 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.488039 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.488500 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.488532 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.488756 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.488966 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.489130 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.489289 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.489612 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.489866 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.489889 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.490026 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.490149 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.490261 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.490344 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.494271 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.494636 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.494659 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.495050 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.495292 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.495511 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.495682 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.737773 1119263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:36.826450 1119263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-264552" to be "Ready" ...
	I0127 03:02:36.857580 1119263 node_ready.go:49] node "embed-certs-264552" has status "Ready":"True"
	I0127 03:02:36.857609 1119263 node_ready.go:38] duration metric: took 31.04815ms for node "embed-certs-264552" to be "Ready" ...
	I0127 03:02:36.857623 1119263 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:36.873458 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:36.877540 1119263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:36.957829 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:36.957866 1119263 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:37.005603 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:37.005635 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:37.006377 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:37.031565 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:37.031587 1119263 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:37.100245 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:37.100282 1119263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:37.175281 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:37.175309 1119263 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:37.221791 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:37.221825 1119263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:37.308268 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:37.423632 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:37.423660 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:37.588554 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:37.588586 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:37.589111 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
	I0127 03:02:37.589130 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:37.589147 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:37.589162 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:37.589176 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:37.589462 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:37.589483 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:37.634711 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:37.634744 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:37.635023 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
	I0127 03:02:37.635065 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:37.635073 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:37.649206 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:37.649231 1119263 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:37.780671 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:37.780709 1119263 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:37.963118 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:37.963151 1119263 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:38.051717 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:38.051755 1119263 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:38.102698 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:38.102726 1119263 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:38.177754 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:38.867496 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.861076308s)
	I0127 03:02:38.867579 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.867594 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.868010 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.868037 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.868054 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.868067 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.868377 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.868397 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.923746 1119263 pod_ready.go:103] pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:38.991645 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.683326945s)
	I0127 03:02:38.991708 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.991728 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.992116 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.992137 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.992146 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.992153 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.992566 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
	I0127 03:02:38.992598 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.992624 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.992643 1119263 addons.go:479] Verifying addon metrics-server=true in "embed-certs-264552"
	I0127 03:02:39.990731 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.812917797s)
	I0127 03:02:39.990802 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:39.990818 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:39.991192 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:39.991223 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:39.991235 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:39.991246 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:39.991554 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:39.991575 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:39.993095 1119263 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-264552 addons enable metrics-server
	
	I0127 03:02:39.994564 1119263 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 03:02:35.602346 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:38.100810 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:39.995898 1119263 addons.go:514] duration metric: took 3.595931069s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 03:02:40.888544 1119263 pod_ready.go:93] pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.888568 1119263 pod_ready.go:82] duration metric: took 4.01099998s for pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.888579 1119263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.895910 1119263 pod_ready.go:93] pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.895941 1119263 pod_ready.go:82] duration metric: took 7.354168ms for pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.895955 1119263 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.900393 1119263 pod_ready.go:93] pod "etcd-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.900415 1119263 pod_ready.go:82] duration metric: took 4.45357ms for pod "etcd-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.900426 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.908664 1119263 pod_ready.go:93] pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.908686 1119263 pod_ready.go:82] duration metric: took 8.251039ms for pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.908697 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:42.917072 1119263 pod_ready.go:103] pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:44.927051 1119263 pod_ready.go:93] pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:44.927083 1119263 pod_ready.go:82] duration metric: took 4.01837775s for pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.927096 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kwqqr" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.939727 1119263 pod_ready.go:93] pod "kube-proxy-kwqqr" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:44.939759 1119263 pod_ready.go:82] duration metric: took 12.654042ms for pod "kube-proxy-kwqqr" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.939772 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.966136 1119263 pod_ready.go:93] pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:44.966165 1119263 pod_ready.go:82] duration metric: took 26.38251ms for pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.966178 1119263 pod_ready.go:39] duration metric: took 8.108541494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:44.966199 1119263 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:44.966260 1119263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
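	[editor's note] The readiness gate above (pod_ready.go) walks every system-critical pod matched by the label selectors listed in the log and waits for its Ready condition. minikube performs this check through the Kubernetes API in Go; the kubectl form below is only an equivalent manual check, not a command from the log:
	
		# list kube-dns pods and their Ready condition, one per line
		sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
		  get pods -n kube-system -l k8s-app=kube-dns \
		  -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	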
	I0127 03:02:40.598596 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:41.593185 1119269 pod_ready.go:82] duration metric: took 4m0.0010842s for pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:41.593221 1119269 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 03:02:41.593251 1119269 pod_ready.go:39] duration metric: took 4m13.044846596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:41.593292 1119269 kubeadm.go:597] duration metric: took 4m21.461431723s to restartPrimaryControlPlane
	W0127 03:02:41.593372 1119269 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:41.593408 1119269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:02:43.620030 1119269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.026590178s)
	I0127 03:02:43.620115 1119269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:43.639142 1119269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:43.651292 1119269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:43.661667 1119269 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:43.661687 1119269 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:43.661733 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 03:02:43.672110 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:43.672165 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:43.683718 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 03:02:43.693914 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:43.693983 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:43.704250 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 03:02:43.714202 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:43.714283 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:43.724775 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 03:02:43.734789 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:43.734857 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
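	[editor's note] The block above is the stale-config cleanup: for each kubeconfig under /etc/kubernetes, minikube greps for the expected API endpoint (control-plane.minikube.internal:8444 for this profile) and removes the file when the check fails — here every file is already gone after the kubeadm reset, so grep exits with status 2. The logged commands condense to roughly this loop (sketch):
	
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  # grep exits non-zero when the endpoint is absent or the file is missing,
		  # in which case the stale config is removed before kubeadm init
		  sudo grep "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done
	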
	I0127 03:02:43.746079 1119269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:43.925921 1119269 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:44.991380 1119263 api_server.go:72] duration metric: took 8.59171979s to wait for apiserver process to appear ...
	I0127 03:02:44.991410 1119263 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:44.991439 1119263 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0127 03:02:44.997033 1119263 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I0127 03:02:44.998283 1119263 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:44.998310 1119263 api_server.go:131] duration metric: took 6.891198ms to wait for apiserver health ...
	I0127 03:02:44.998321 1119263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:45.087014 1119263 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:45.087059 1119263 system_pods.go:61] "coredns-668d6bf9bc-mbkl2" [29059a1e-4228-4fbc-bf18-0de800cbb47a] Running
	I0127 03:02:45.087067 1119263 system_pods.go:61] "coredns-668d6bf9bc-n5wn4" [416b5ae4-f786-4b1e-a699-d688b967a6f4] Running
	I0127 03:02:45.087073 1119263 system_pods.go:61] "etcd-embed-certs-264552" [b2389caf-28fb-42d8-9912-8c3829f8bfd6] Running
	I0127 03:02:45.087079 1119263 system_pods.go:61] "kube-apiserver-embed-certs-264552" [0150043f-38b8-4946-84f1-0c9c7aaf7328] Running
	I0127 03:02:45.087084 1119263 system_pods.go:61] "kube-controller-manager-embed-certs-264552" [940554f4-564d-4939-a09a-0ea61e36ff6c] Running
	I0127 03:02:45.087090 1119263 system_pods.go:61] "kube-proxy-kwqqr" [85b35a19-646d-43a8-b90f-c5a5b4a93393] Running
	I0127 03:02:45.087096 1119263 system_pods.go:61] "kube-scheduler-embed-certs-264552" [4a578d9d-f487-4839-a23d-1ec267612f0d] Running
	I0127 03:02:45.087106 1119263 system_pods.go:61] "metrics-server-f79f97bbb-6dg5x" [4b9cd5d7-1055-45ea-8ac9-1a91b9246c0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:45.087114 1119263 system_pods.go:61] "storage-provisioner" [4e4e1f9a-505b-4ed2-ad81-5543176f645a] Running
	I0127 03:02:45.087123 1119263 system_pods.go:74] duration metric: took 88.795129ms to wait for pod list to return data ...
	I0127 03:02:45.087134 1119263 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:02:45.282547 1119263 default_sa.go:45] found service account: "default"
	I0127 03:02:45.282578 1119263 default_sa.go:55] duration metric: took 195.436382ms for default service account to be created ...
	I0127 03:02:45.282589 1119263 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:02:45.486513 1119263 system_pods.go:87] 9 kube-system pods found
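	[editor's note] The health gate logged at 03:02:44 above is a two-step check: confirm the kube-apiserver process exists, then probe its /healthz endpoint over HTTPS and expect a 200 with body "ok". An equivalent manual probe against the address shown in the log (sketch; only the pgrep line is taken verbatim from the log):
	
		sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # apiserver process check, as run in the log
		curl -k https://192.168.39.145:8443/healthz      # expect HTTP 200 with body "ok"
	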
	I0127 03:02:52.671028 1119269 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:52.671099 1119269 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:52.671206 1119269 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:52.671380 1119269 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:52.671539 1119269 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:52.671639 1119269 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:52.673297 1119269 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:52.673383 1119269 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:52.673474 1119269 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:52.673554 1119269 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:52.673609 1119269 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:52.673670 1119269 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:52.673716 1119269 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:52.673767 1119269 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:52.673816 1119269 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:52.673876 1119269 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:52.673954 1119269 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:52.673999 1119269 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:52.674047 1119269 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:52.674108 1119269 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:52.674187 1119269 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:52.674263 1119269 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:52.674321 1119269 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:52.674367 1119269 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:52.674447 1119269 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:52.674507 1119269 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:52.675997 1119269 out.go:235]   - Booting up control plane ...
	I0127 03:02:52.676130 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:52.676280 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:52.676377 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:52.676517 1119269 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:52.676652 1119269 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:52.676719 1119269 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:52.676922 1119269 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:52.677082 1119269 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:52.677173 1119269 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001864315s
	I0127 03:02:52.677287 1119269 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:52.677368 1119269 kubeadm.go:310] [api-check] The API server is healthy after 5.001344194s
	I0127 03:02:52.677511 1119269 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:52.677653 1119269 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:52.677715 1119269 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:52.677867 1119269 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-717075 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:52.677952 1119269 kubeadm.go:310] [bootstrap-token] Using token: dptef9.zgjhm0hnxmak7ndp
	I0127 03:02:52.679531 1119269 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:52.679681 1119269 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:52.679793 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:52.680000 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:52.680151 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:52.680307 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:52.680415 1119269 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:52.680548 1119269 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:52.680611 1119269 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:52.680680 1119269 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:52.680690 1119269 kubeadm.go:310] 
	I0127 03:02:52.680769 1119269 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:52.680779 1119269 kubeadm.go:310] 
	I0127 03:02:52.680875 1119269 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:52.680886 1119269 kubeadm.go:310] 
	I0127 03:02:52.680922 1119269 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:52.681024 1119269 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:52.681096 1119269 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:52.681106 1119269 kubeadm.go:310] 
	I0127 03:02:52.681192 1119269 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:52.681208 1119269 kubeadm.go:310] 
	I0127 03:02:52.681275 1119269 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:52.681289 1119269 kubeadm.go:310] 
	I0127 03:02:52.681363 1119269 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:52.681491 1119269 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:52.681562 1119269 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:52.681568 1119269 kubeadm.go:310] 
	I0127 03:02:52.681636 1119269 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:52.681749 1119269 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:52.681759 1119269 kubeadm.go:310] 
	I0127 03:02:52.681896 1119269 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token dptef9.zgjhm0hnxmak7ndp \
	I0127 03:02:52.682053 1119269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
	I0127 03:02:52.682085 1119269 kubeadm.go:310] 	--control-plane 
	I0127 03:02:52.682091 1119269 kubeadm.go:310] 
	I0127 03:02:52.682242 1119269 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:52.682259 1119269 kubeadm.go:310] 
	I0127 03:02:52.682381 1119269 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token dptef9.zgjhm0hnxmak7ndp \
	I0127 03:02:52.682532 1119269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba 
	I0127 03:02:52.682561 1119269 cni.go:84] Creating CNI manager for ""
	I0127 03:02:52.682574 1119269 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:52.684226 1119269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:52.685352 1119269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:52.697398 1119269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:52.719046 1119269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:52.719104 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:52.719145 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-717075 minikube.k8s.io/updated_at=2025_01_27T03_02_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=default-k8s-diff-port-717075 minikube.k8s.io/primary=true
	I0127 03:02:52.761799 1119269 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:52.952929 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:53.453841 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:53.953656 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:54.453137 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:54.953750 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:55.453823 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:55.953104 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:56.453840 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:56.953721 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:57.072043 1119269 kubeadm.go:1113] duration metric: took 4.352992678s to wait for elevateKubeSystemPrivileges
	I0127 03:02:57.072116 1119269 kubeadm.go:394] duration metric: took 4m37.021077009s to StartCluster
	I0127 03:02:57.072145 1119269 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:57.072271 1119269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:57.073904 1119269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:57.074254 1119269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.17 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:57.074373 1119269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:57.074508 1119269 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074520 1119269 config.go:182] Loaded profile config "default-k8s-diff-port-717075": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:57.074535 1119269 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-717075"
	W0127 03:02:57.074544 1119269 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:57.074540 1119269 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074579 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.074576 1119269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717075"
	I0127 03:02:57.074572 1119269 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074588 1119269 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074605 1119269 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-717075"
	I0127 03:02:57.074614 1119269 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-717075"
	W0127 03:02:57.074616 1119269 addons.go:247] addon dashboard should already be in state true
	W0127 03:02:57.074623 1119269 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:57.074653 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.074659 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.075056 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075068 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075068 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075121 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.075123 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075163 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.075267 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.075353 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.081008 1119269 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:57.082885 1119269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:57.094206 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I0127 03:02:57.094931 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.095746 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.095766 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.095843 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0127 03:02:57.095963 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
	I0127 03:02:57.096377 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.096485 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.096649 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.097010 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.097039 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.097172 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.097228 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.097627 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.097906 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.097919 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.098237 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.098286 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.098455 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I0127 03:02:57.098935 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.099556 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.099578 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.099797 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.100439 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.100480 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.100698 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.100896 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.105155 1119269 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-717075"
	W0127 03:02:57.105188 1119269 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:57.105221 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.105609 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.105668 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.121375 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0127 03:02:57.121658 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0127 03:02:57.121901 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.122123 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.122486 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0127 03:02:57.122504 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.122523 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.122758 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.122778 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.122813 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0127 03:02:57.122851 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.122923 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.123171 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.123241 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.123868 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.123978 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.123990 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.124007 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.124368 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.124387 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.124452 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.124681 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.124733 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.125300 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.125347 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.126534 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.127123 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.127415 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.128921 1119269 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:57.128930 1119269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:57.128931 1119269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:57.130374 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:57.130393 1119269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:57.130411 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:57.130431 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.130395 1119269 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:57.130396 1119269 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:57.130621 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.132516 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:57.132532 1119269 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:57.132547 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.135860 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.135912 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136120 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136644 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.136669 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136702 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.136736 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136747 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.136809 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.137008 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136938 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.137108 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.137179 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.137309 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.137376 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.137403 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.137589 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.137621 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.137794 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.138008 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.138010 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.152787 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0127 03:02:57.153399 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.153967 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.154002 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.154377 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.154584 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.156381 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.156603 1119269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:57.156624 1119269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:57.156649 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.159499 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.159944 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.160261 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.160520 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.160684 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.163248 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.164348 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.378051 1119269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:57.433542 1119269 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717075" to be "Ready" ...
	I0127 03:02:57.474874 1119269 node_ready.go:49] node "default-k8s-diff-port-717075" has status "Ready":"True"
	I0127 03:02:57.474911 1119269 node_ready.go:38] duration metric: took 41.327465ms for node "default-k8s-diff-port-717075" to be "Ready" ...
	I0127 03:02:57.474926 1119269 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:57.483255 1119269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:57.519688 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:57.542506 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:57.549073 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:57.549102 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:57.584535 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:57.584568 1119269 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:57.655673 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:57.655711 1119269 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:57.690996 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:57.691028 1119269 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:57.822313 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:57.822349 1119269 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:57.834363 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:57.834392 1119269 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:57.911077 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:58.019919 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:58.019953 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:58.212111 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:58.212145 1119269 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:58.309353 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:58.309381 1119269 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:58.378582 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:58.378611 1119269 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:58.444731 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:58.444762 1119269 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:58.506703 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:58.506745 1119269 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:58.584131 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:58.850852 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.331110115s)
	I0127 03:02:58.850948 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.850973 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.850970 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.308397522s)
	I0127 03:02:58.851017 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.851054 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.851306 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851328 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.851341 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.851348 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.851426 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851444 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.851465 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.851476 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.851634 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851650 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.851693 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:58.851740 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851762 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:58.851765 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.886972 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.887006 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.887346 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.887369 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.219464 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.308329693s)
	I0127 03:02:59.219531 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.219552 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:59.219946 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:59.220003 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.220024 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.220045 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.220059 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:59.220303 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:59.220340 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.220349 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.220364 1119269 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-717075"
	I0127 03:02:59.493877 1119269 pod_ready.go:93] pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:59.493919 1119269 pod_ready.go:82] duration metric: took 2.010631788s for pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:59.493932 1119269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:00.135755 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.551568283s)
	I0127 03:03:00.135819 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:03:00.135831 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:03:00.136153 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:03:00.136171 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:00.136179 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:03:00.136187 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:03:00.136181 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:03:00.136446 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:03:00.136459 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:00.137984 1119269 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717075 addons enable metrics-server
	
	I0127 03:03:00.139476 1119269 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 03:03:00.140933 1119269 addons.go:514] duration metric: took 3.06657827s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 03:03:01.501546 1119269 pod_ready.go:103] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"False"
	I0127 03:03:04.000116 1119269 pod_ready.go:103] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"False"
	I0127 03:03:05.002068 1119269 pod_ready.go:93] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:05.002134 1119269 pod_ready.go:82] duration metric: took 5.508188953s for pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:05.002149 1119269 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:05.007136 1119269 pod_ready.go:93] pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:05.007163 1119269 pod_ready.go:82] duration metric: took 5.003743ms for pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:05.007173 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.013821 1119269 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.013847 1119269 pod_ready.go:82] duration metric: took 1.006667196s for pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.013860 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.018661 1119269 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.018683 1119269 pod_ready.go:82] duration metric: took 4.814763ms for pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.018694 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlkhv" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.022482 1119269 pod_ready.go:93] pod "kube-proxy-nlkhv" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.022500 1119269 pod_ready.go:82] duration metric: took 3.79842ms for pod "kube-proxy-nlkhv" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.022512 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.197960 1119269 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.197986 1119269 pod_ready.go:82] duration metric: took 175.467759ms for pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.197995 1119269 pod_ready.go:39] duration metric: took 8.723057571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
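	[editor's note] The "Ready" waits recorded in the pod_ready.go lines above amount to polling the PodReady condition of each system-critical pod. The following is only an illustrative client-go sketch of that kind of check, not minikube's actual pod_ready.go code; the kubeconfig path and the single label selector are placeholder assumptions.

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; the test run uses its own profile kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// List kube-system pods carrying one of the labels mentioned in the log
		// (k8s-app=kube-proxy) and report whether their PodReady condition is True.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-proxy"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s Ready=%v\n", p.Name, ready)
		}
	}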
	I0127 03:03:06.198012 1119269 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:03:06.198073 1119269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:06.215210 1119269 api_server.go:72] duration metric: took 9.140900628s to wait for apiserver process to appear ...
	I0127 03:03:06.215249 1119269 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:03:06.215273 1119269 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8444/healthz ...
	I0127 03:03:06.219951 1119269 api_server.go:279] https://192.168.72.17:8444/healthz returned 200:
	ok
	I0127 03:03:06.220901 1119269 api_server.go:141] control plane version: v1.32.1
	I0127 03:03:06.220922 1119269 api_server.go:131] duration metric: took 5.666132ms to wait for apiserver health ...
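	[editor's note] The healthz probe logged just above is an HTTPS GET against the apiserver that returned 200/"ok". A rough standalone sketch of the same request follows; it assumes the default RBAC binding that exposes /healthz to unauthenticated clients and skips TLS verification purely for brevity (minikube itself uses its generated client certificates).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Certificate verification is skipped only for this illustration.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.72.17:8444/healthz")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The log above shows the equivalent request returning 200 and "ok".
		fmt.Println(resp.StatusCode, string(body))
	}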
	I0127 03:03:06.220929 1119269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:03:06.402128 1119269 system_pods.go:59] 9 kube-system pods found
	I0127 03:03:06.402165 1119269 system_pods.go:61] "coredns-668d6bf9bc-htglq" [2d4500a2-7bc9-4c25-af55-3c20257065c2] Running
	I0127 03:03:06.402172 1119269 system_pods.go:61] "coredns-668d6bf9bc-pwz9n" [cf6b7c7c-59eb-4901-88ba-a6e4556ddd4c] Running
	I0127 03:03:06.402177 1119269 system_pods.go:61] "etcd-default-k8s-diff-port-717075" [50fac615-6926-4023-8467-fa0c3fec39b2] Running
	I0127 03:03:06.402181 1119269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717075" [f86307a0-5994-4178-af8a-43613ed2bd63] Running
	I0127 03:03:06.402186 1119269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717075" [543f1b9a-da5a-4963-adc0-3bb2c88f2de0] Running
	I0127 03:03:06.402191 1119269 system_pods.go:61] "kube-proxy-nlkhv" [57c52d4f-937f-4fc8-98dd-9aa0531f8d17] Running
	I0127 03:03:06.402197 1119269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717075" [bb54f953-7c1f-4ce8-a590-7d029dcfea24] Running
	I0127 03:03:06.402205 1119269 system_pods.go:61] "metrics-server-f79f97bbb-fthnn" [fb8e4d08-fb1f-49a5-8984-44e975174502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:03:06.402211 1119269 system_pods.go:61] "storage-provisioner" [0a7c6b15-4ec5-46cf-8f6e-d98c292af196] Running
	I0127 03:03:06.402225 1119269 system_pods.go:74] duration metric: took 181.288367ms to wait for pod list to return data ...
	I0127 03:03:06.402236 1119269 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:03:06.598976 1119269 default_sa.go:45] found service account: "default"
	I0127 03:03:06.599007 1119269 default_sa.go:55] duration metric: took 196.76041ms for default service account to be created ...
	I0127 03:03:06.599017 1119269 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:03:06.802139 1119269 system_pods.go:87] 9 kube-system pods found
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	52be15a103b51       523cad1a4df73       29 seconds ago      Exited              dashboard-metrics-scraper   9                   712a724f859bb       dashboard-metrics-scraper-86c6bf9756-k2z8t
	c623878236cab       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   33a2b97eec49d       kubernetes-dashboard-7779f9b69b-7zlvr
	c1d994b589453       6e38f40d628db       21 minutes ago      Running             storage-provisioner         0                   1a96975049b69       storage-provisioner
	d8466597996e8       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   b9bef54853881       coredns-668d6bf9bc-86j6q
	a0b17beaa8251       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   e1ea225e4e626       coredns-668d6bf9bc-fk8cw
	89845d408bed3       e29f9c7391fd9       21 minutes ago      Running             kube-proxy                  0                   85bbda280c0ca       kube-proxy-45pz6
	f8dd73f608c82       2b0d6572d062c       21 minutes ago      Running             kube-scheduler              2                   bae59ee898b44       kube-scheduler-no-preload-887091
	b8952681ec21a       a9e7e6b294baf       21 minutes ago      Running             etcd                        2                   9b5923edae55c       etcd-no-preload-887091
	062301b551bd4       019ee182b58e2       21 minutes ago      Running             kube-controller-manager     2                   ceef7cf796b46       kube-controller-manager-no-preload-887091
	786778ce9f4d3       95c0bda56fc4d       21 minutes ago      Running             kube-apiserver              2                   71b51ccde95cd       kube-apiserver-no-preload-887091
	
	
	==> containerd <==
	Jan 27 03:18:07 no-preload-887091 containerd[555]: time="2025-01-27T03:18:07.396149697Z" level=info msg="CreateContainer within sandbox \"712a724f859bbef28a8fab7b018ed3fc9cd01252e3a35c5d1f53dd383339dada\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04\""
	Jan 27 03:18:07 no-preload-887091 containerd[555]: time="2025-01-27T03:18:07.397548944Z" level=info msg="StartContainer for \"b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04\""
	Jan 27 03:18:07 no-preload-887091 containerd[555]: time="2025-01-27T03:18:07.495636288Z" level=info msg="StartContainer for \"b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04\" returns successfully"
	Jan 27 03:18:07 no-preload-887091 containerd[555]: time="2025-01-27T03:18:07.542203640Z" level=info msg="shim disconnected" id=b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04 namespace=k8s.io
	Jan 27 03:18:07 no-preload-887091 containerd[555]: time="2025-01-27T03:18:07.542414462Z" level=warning msg="cleaning up after shim disconnected" id=b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04 namespace=k8s.io
	Jan 27 03:18:07 no-preload-887091 containerd[555]: time="2025-01-27T03:18:07.542425633Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 03:18:08 no-preload-887091 containerd[555]: time="2025-01-27T03:18:08.374948584Z" level=info msg="RemoveContainer for \"7afdf5ec91198c1839ee48b40244e47f8195a3771b75b64eafca838b916045db\""
	Jan 27 03:18:08 no-preload-887091 containerd[555]: time="2025-01-27T03:18:08.382659346Z" level=info msg="RemoveContainer for \"7afdf5ec91198c1839ee48b40244e47f8195a3771b75b64eafca838b916045db\" returns successfully"
	Jan 27 03:18:16 no-preload-887091 containerd[555]: time="2025-01-27T03:18:16.365269759Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 03:18:16 no-preload-887091 containerd[555]: time="2025-01-27T03:18:16.376975868Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 03:18:16 no-preload-887091 containerd[555]: time="2025-01-27T03:18:16.378939196Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 03:18:16 no-preload-887091 containerd[555]: time="2025-01-27T03:18:16.379035528Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.368068895Z" level=info msg="CreateContainer within sandbox \"712a724f859bbef28a8fab7b018ed3fc9cd01252e3a35c5d1f53dd383339dada\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.394110770Z" level=info msg="CreateContainer within sandbox \"712a724f859bbef28a8fab7b018ed3fc9cd01252e3a35c5d1f53dd383339dada\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb\""
	Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.395118548Z" level=info msg="StartContainer for \"52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb\""
	Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.492039113Z" level=info msg="StartContainer for \"52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb\" returns successfully"
	Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.545931308Z" level=info msg="shim disconnected" id=52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb namespace=k8s.io
	Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.546043278Z" level=warning msg="cleaning up after shim disconnected" id=52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb namespace=k8s.io
	Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.546054640Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 03:23:16 no-preload-887091 containerd[555]: time="2025-01-27T03:23:16.141497054Z" level=info msg="RemoveContainer for \"b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04\""
	Jan 27 03:23:16 no-preload-887091 containerd[555]: time="2025-01-27T03:23:16.148607925Z" level=info msg="RemoveContainer for \"b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04\" returns successfully"
	Jan 27 03:23:19 no-preload-887091 containerd[555]: time="2025-01-27T03:23:19.365333645Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 03:23:19 no-preload-887091 containerd[555]: time="2025-01-27T03:23:19.374785620Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 03:23:19 no-preload-887091 containerd[555]: time="2025-01-27T03:23:19.377055116Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 03:23:19 no-preload-887091 containerd[555]: time="2025-01-27T03:23:19.377132516Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [a0b17beaa8251fabd82fb44dc88123c6eacacd5d8fd174979a3a7849a205fc81] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d8466597996e84b368a8c1d42dd8e6e8e25d177a043d482029dde1ea6da57bc8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-887091
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-887091
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
	                    minikube.k8s.io/name=no-preload-887091
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T03_02_12_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 03:02:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-887091
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 03:23:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 03:20:52 +0000   Mon, 27 Jan 2025 03:02:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 03:20:52 +0000   Mon, 27 Jan 2025 03:02:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 03:20:52 +0000   Mon, 27 Jan 2025 03:02:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 03:20:52 +0000   Mon, 27 Jan 2025 03:02:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.201
	  Hostname:    no-preload-887091
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b5097775ecaf41659f7fab7087aa51ad
	  System UUID:                b5097775-ecaf-4165-9f7f-ab7087aa51ad
	  Boot ID:                    b04cfcf9-a4ff-4126-923b-98e2b7343e1f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-86j6q                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-fk8cw                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-887091                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-887091              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-887091     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-45pz6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-887091              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-vshg4                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-k2z8t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-7zlvr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-887091 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-887091 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-887091 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-887091 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-887091 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-887091 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-887091 event: Registered Node no-preload-887091 in Controller
	
	
	==> dmesg <==
	[  +0.053207] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041788] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.942372] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.847394] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.666860] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.235944] systemd-fstab-generator[478]: Ignoring "noauto" option for root device
	[  +0.066561] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079419] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.155842] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
	[  +0.153234] systemd-fstab-generator[516]: Ignoring "noauto" option for root device
	[  +0.284968] systemd-fstab-generator[547]: Ignoring "noauto" option for root device
	[  +1.272885] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +2.240619] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +0.877564] kauditd_printk_skb: 225 callbacks suppressed
	[  +5.546711] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.482872] kauditd_printk_skb: 48 callbacks suppressed
	[Jan27 03:02] systemd-fstab-generator[3087]: Ignoring "noauto" option for root device
	[  +6.582425] systemd-fstab-generator[3461]: Ignoring "noauto" option for root device
	[  +0.115221] kauditd_printk_skb: 87 callbacks suppressed
	[  +4.900139] systemd-fstab-generator[3562]: Ignoring "noauto" option for root device
	[  +0.100609] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.497695] kauditd_printk_skb: 96 callbacks suppressed
	[  +5.097265] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [b8952681ec21a9a0b2eaeeb1cf22e6a83ba35d8149bc0bcc150b663e15c96e8b] <==
	{"level":"info","ts":"2025-01-27T03:02:06.713943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f000dedbcae268ef elected leader f000dedbcae268ef at term 2"}
	{"level":"info","ts":"2025-01-27T03:02:06.718958Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T03:02:06.722079Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f000dedbcae268ef","local-member-attributes":"{Name:no-preload-887091 ClientURLs:[https://192.168.61.201:2379]}","request-path":"/0/members/f000dedbcae268ef/attributes","cluster-id":"334af0e9e11f35f3","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T03:02:06.722552Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T03:02:06.723225Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T03:02:06.723394Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"334af0e9e11f35f3","local-member-id":"f000dedbcae268ef","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T03:02:06.728916Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T03:02:06.730812Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T03:02:06.723470Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T03:02:06.723983Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T03:02:06.728489Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T03:02:06.732162Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.201:2379"}
	{"level":"info","ts":"2025-01-27T03:02:06.737840Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T03:02:06.731286Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T03:02:27.106125Z","caller":"traceutil/trace.go:171","msg":"trace[1771517659] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"106.806683ms","start":"2025-01-27T03:02:26.998039Z","end":"2025-01-27T03:02:27.104846Z","steps":["trace[1771517659] 'process raft request'  (duration: 106.581629ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T03:02:28.117947Z","caller":"traceutil/trace.go:171","msg":"trace[396997179] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"113.836092ms","start":"2025-01-27T03:02:28.004090Z","end":"2025-01-27T03:02:28.117926Z","steps":["trace[396997179] 'process raft request'  (duration: 113.051193ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T03:12:06.796875Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":879}
	{"level":"info","ts":"2025-01-27T03:12:06.840173Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":879,"took":"41.136016ms","hash":416233361,"current-db-size-bytes":3092480,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":3092480,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2025-01-27T03:12:06.840426Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":416233361,"revision":879,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T03:17:06.807824Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1130}
	{"level":"info","ts":"2025-01-27T03:17:06.812599Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1130,"took":"3.999871ms","hash":3786764128,"current-db-size-bytes":3092480,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1753088,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T03:17:06.812812Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3786764128,"revision":1130,"compact-revision":879}
	{"level":"info","ts":"2025-01-27T03:22:06.817514Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1381}
	{"level":"info","ts":"2025-01-27T03:22:06.823289Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1381,"took":"4.747701ms","hash":1480603789,"current-db-size-bytes":3092480,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1798144,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T03:22:06.823334Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1480603789,"revision":1381,"compact-revision":1130}
	
	
	==> kernel <==
	 03:23:45 up 26 min,  0 users,  load average: 0.18, 0.22, 0.24
	Linux no-preload-887091 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [786778ce9f4d324d0b43adbaad49fef2d4cef26a7b57db69061e9a3a8fa8872e] <==
	I0127 03:20:09.513140       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:20:09.514265       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 03:22:08.511486       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:22:08.511866       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 03:22:09.513907       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 03:22:09.513958       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:22:09.514411       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 03:22:09.514552       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 03:22:09.516214       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:22:09.516526       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 03:23:09.516858       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 03:23:09.516860       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:23:09.517061       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 03:23:09.517204       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 03:23:09.518293       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:23:09.518298       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [062301b551bd4f61224a1535e944d5ec7e78ab64d71c01bd6d07c61175163036] <==
	E0127 03:18:45.300335       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:18:45.353548       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:19:15.307525       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:19:15.362462       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:19:45.314342       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:19:45.371696       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:20:15.322368       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:20:15.380648       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:20:45.330972       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:20:45.389628       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 03:20:52.869111       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-887091"
	E0127 03:21:15.337527       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:21:15.398190       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:21:45.344460       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:21:45.406610       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:22:15.351843       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:22:15.414656       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:22:45.359327       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:22:45.422507       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:23:15.375909       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:23:15.430465       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 03:23:16.162323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="221.981µs"
	I0127 03:23:17.160429       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="76.869µs"
	I0127 03:23:30.395234       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="134.715µs"
	I0127 03:23:44.381085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="128.889µs"
	
	
	==> kube-proxy [89845d408bed3c7d6dfe76f5d2117ad0973f004f9be8c7e57c0c81bfcbcc9a81] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 03:02:16.908197       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 03:02:16.922602       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.201"]
	E0127 03:02:16.922695       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 03:02:17.023472       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 03:02:17.023523       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 03:02:17.023550       1 server_linux.go:170] "Using iptables Proxier"
	I0127 03:02:17.026808       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 03:02:17.027195       1 server.go:497] "Version info" version="v1.32.1"
	I0127 03:02:17.027232       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 03:02:17.029073       1 config.go:199] "Starting service config controller"
	I0127 03:02:17.029144       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 03:02:17.029180       1 config.go:105] "Starting endpoint slice config controller"
	I0127 03:02:17.029185       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 03:02:17.033250       1 config.go:329] "Starting node config controller"
	I0127 03:02:17.033262       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 03:02:17.130837       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 03:02:17.130865       1 shared_informer.go:320] Caches are synced for service config
	I0127 03:02:17.136859       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f8dd73f608c8272c885aecde8660fc054bde10b8e03b7cda7706a4072124259e] <==
	W0127 03:02:08.520083       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:08.520593       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:08.520290       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 03:02:08.520693       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:08.520928       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:08.521700       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:09.441221       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:09.441392       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:09.450341       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:09.450419       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:09.470171       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:09.470449       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:09.483004       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 03:02:09.483079       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:09.532058       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 03:02:09.532140       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:09.609182       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 03:02:09.609254       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:09.636110       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 03:02:09.636185       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 03:02:09.797334       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 03:02:09.797403       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:09.861565       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 03:02:09.861863       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 03:02:12.802108       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 03:22:54 no-preload-887091 kubelet[3469]: E0127 03:22:54.365547    3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vshg4" podUID="33ae36ed-d8a4-4d60-bcd0-1becf2d490bc"
	Jan 27 03:23:03 no-preload-887091 kubelet[3469]: I0127 03:23:03.363620    3469 scope.go:117] "RemoveContainer" containerID="b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04"
	Jan 27 03:23:03 no-preload-887091 kubelet[3469]: E0127 03:23:03.364611    3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2z8t_kubernetes-dashboard(8bac67fc-9bda-4ec5-99f6-30df6d057894)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2z8t" podUID="8bac67fc-9bda-4ec5-99f6-30df6d057894"
	Jan 27 03:23:08 no-preload-887091 kubelet[3469]: E0127 03:23:08.365587    3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vshg4" podUID="33ae36ed-d8a4-4d60-bcd0-1becf2d490bc"
	Jan 27 03:23:11 no-preload-887091 kubelet[3469]: E0127 03:23:11.445400    3469 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 03:23:11 no-preload-887091 kubelet[3469]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 03:23:11 no-preload-887091 kubelet[3469]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 03:23:11 no-preload-887091 kubelet[3469]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 03:23:11 no-preload-887091 kubelet[3469]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 03:23:15 no-preload-887091 kubelet[3469]: I0127 03:23:15.364374    3469 scope.go:117] "RemoveContainer" containerID="b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04"
	Jan 27 03:23:16 no-preload-887091 kubelet[3469]: I0127 03:23:16.138723    3469 scope.go:117] "RemoveContainer" containerID="b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04"
	Jan 27 03:23:16 no-preload-887091 kubelet[3469]: I0127 03:23:16.138917    3469 scope.go:117] "RemoveContainer" containerID="52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb"
	Jan 27 03:23:16 no-preload-887091 kubelet[3469]: E0127 03:23:16.139071    3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2z8t_kubernetes-dashboard(8bac67fc-9bda-4ec5-99f6-30df6d057894)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2z8t" podUID="8bac67fc-9bda-4ec5-99f6-30df6d057894"
	Jan 27 03:23:17 no-preload-887091 kubelet[3469]: I0127 03:23:17.143068    3469 scope.go:117] "RemoveContainer" containerID="52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb"
	Jan 27 03:23:17 no-preload-887091 kubelet[3469]: E0127 03:23:17.143227    3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2z8t_kubernetes-dashboard(8bac67fc-9bda-4ec5-99f6-30df6d057894)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2z8t" podUID="8bac67fc-9bda-4ec5-99f6-30df6d057894"
	Jan 27 03:23:19 no-preload-887091 kubelet[3469]: E0127 03:23:19.377482    3469 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 03:23:19 no-preload-887091 kubelet[3469]: E0127 03:23:19.377595    3469 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 03:23:19 no-preload-887091 kubelet[3469]: E0127 03:23:19.378034    3469 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhrmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-vshg4_kube-system(33ae36ed-d8a4-4d60-bcd0-1becf2d490bc): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 03:23:19 no-preload-887091 kubelet[3469]: E0127 03:23:19.379404    3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vshg4" podUID="33ae36ed-d8a4-4d60-bcd0-1becf2d490bc"
	Jan 27 03:23:30 no-preload-887091 kubelet[3469]: E0127 03:23:30.372526    3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vshg4" podUID="33ae36ed-d8a4-4d60-bcd0-1becf2d490bc"
	Jan 27 03:23:32 no-preload-887091 kubelet[3469]: I0127 03:23:32.364044    3469 scope.go:117] "RemoveContainer" containerID="52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb"
	Jan 27 03:23:32 no-preload-887091 kubelet[3469]: E0127 03:23:32.364699    3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2z8t_kubernetes-dashboard(8bac67fc-9bda-4ec5-99f6-30df6d057894)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2z8t" podUID="8bac67fc-9bda-4ec5-99f6-30df6d057894"
	Jan 27 03:23:44 no-preload-887091 kubelet[3469]: E0127 03:23:44.367024    3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vshg4" podUID="33ae36ed-d8a4-4d60-bcd0-1becf2d490bc"
	Jan 27 03:23:45 no-preload-887091 kubelet[3469]: I0127 03:23:45.363452    3469 scope.go:117] "RemoveContainer" containerID="52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb"
	Jan 27 03:23:45 no-preload-887091 kubelet[3469]: E0127 03:23:45.363671    3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2z8t_kubernetes-dashboard(8bac67fc-9bda-4ec5-99f6-30df6d057894)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2z8t" podUID="8bac67fc-9bda-4ec5-99f6-30df6d057894"
	
	
	==> kubernetes-dashboard [c623878236cab2cc3807df982c4d6fbddf7c3bf9d48f30537d07db4a6468f489] <==
	2025/01/27 03:11:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:12:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:12:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:13:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:13:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:14:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:14:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:15:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:15:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:16:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:16:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:17:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:17:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:18:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:18:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:19:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:19:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:20:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:20:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:21:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:21:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:22:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:22:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:23:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:23:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [c1d994b589453b0f758481f1aed5401b976f9d1f1cdc2ece1e8d8640802a2072] <==
	I0127 03:02:18.665351       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 03:02:18.685447       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 03:02:18.688177       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 03:02:18.734492       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85bd6e38-3014-43f5-8832-6e12e3bf9ec7", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-887091_0f751520-7a64-4ee8-8e99-1d594fe7dd01 became leader
	I0127 03:02:18.739328       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 03:02:18.739719       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-887091_0f751520-7a64-4ee8-8e99-1d594fe7dd01!
	I0127 03:02:18.840276       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-887091_0f751520-7a64-4ee8-8e99-1d594fe7dd01!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-887091 -n no-preload-887091
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-887091 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-vshg4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-887091 describe pod metrics-server-f79f97bbb-vshg4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-887091 describe pod metrics-server-f79f97bbb-vshg4: exit status 1 (73.556379ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-vshg4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-887091 describe pod metrics-server-f79f97bbb-vshg4: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1588.39s)
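For reference, a minimal Go sketch of the post-mortem steps logged above (list the pods whose phase is not Running, then describe each one). This is not part of the test harness output; it simply shells out to kubectl and assumes the kubectl binary and the no-preload-887091 context from this report are available locally:

	// postmortem.go - a minimal sketch of the post-mortem commands shown above.
	// Assumptions: kubectl is on PATH and the "no-preload-887091" context exists.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "no-preload-887091" // context name taken from the report above

		// List the names of all pods that are not in phase Running, across namespaces.
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			fmt.Println("listing non-running pods failed:", err)
			return
		}

		// Describe each non-running pod, mirroring the describe command above.
		for _, pod := range strings.Fields(string(out)) {
			desc, err := exec.Command("kubectl", "--context", ctx,
				"describe", "pod", pod).CombinedOutput()
			fmt.Printf("==> describe %s (err=%v)\n%s\n", pod, err, desc)
		}
	}

As the captured output above records, the describe step can exit non-zero (NotFound) when the pod listed in the first command no longer exists by the time it is described.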

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (1613.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-264552 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-264552 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m50.7803342s)

                                                
                                                
-- stdout --
	* [embed-certs-264552] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-264552" primary control-plane node in "embed-certs-264552" cluster
	* Restarting existing kvm2 VM for "embed-certs-264552" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-264552 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:57:29.984625 1119263 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:57:29.984758 1119263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:57:29.984772 1119263 out.go:358] Setting ErrFile to fd 2...
	I0127 02:57:29.984780 1119263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:57:29.985031 1119263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 02:57:29.985640 1119263 out.go:352] Setting JSON to false
	I0127 02:57:29.986676 1119263 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13197,"bootTime":1737933453,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:57:29.986797 1119263 start.go:139] virtualization: kvm guest
	I0127 02:57:29.989086 1119263 out.go:177] * [embed-certs-264552] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:57:29.990413 1119263 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:57:29.990399 1119263 notify.go:220] Checking for updates...
	I0127 02:57:29.991601 1119263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:57:29.992732 1119263 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 02:57:29.993908 1119263 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	I0127 02:57:29.994980 1119263 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:57:29.995916 1119263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:57:29.997320 1119263 config.go:182] Loaded profile config "embed-certs-264552": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:57:29.997920 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:57:29.998012 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:57:30.015334 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0127 02:57:30.015807 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:57:30.016362 1119263 main.go:141] libmachine: Using API Version  1
	I0127 02:57:30.016385 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:57:30.016850 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:57:30.017204 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 02:57:30.017527 1119263 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:57:30.017925 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:57:30.017970 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:57:30.038004 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I0127 02:57:30.038572 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:57:30.039175 1119263 main.go:141] libmachine: Using API Version  1
	I0127 02:57:30.039201 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:57:30.039802 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:57:30.040040 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 02:57:30.077120 1119263 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 02:57:30.078291 1119263 start.go:297] selected driver: kvm2
	I0127 02:57:30.078312 1119263 start.go:901] validating driver "kvm2" against &{Name:embed-certs-264552 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-264552 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:57:30.078489 1119263 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:57:30.079193 1119263 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:30.079302 1119263 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-1057178/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 02:57:30.094448 1119263 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 02:57:30.094846 1119263 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 02:57:30.094884 1119263 cni.go:84] Creating CNI manager for ""
	I0127 02:57:30.094959 1119263 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 02:57:30.095011 1119263 start.go:340] cluster config:
	{Name:embed-certs-264552 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-264552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:57:30.095129 1119263 iso.go:125] acquiring lock: {Name:mkd30bc9d11f9170e89ad95ce7ba25fa6d1e04f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:30.096532 1119263 out.go:177] * Starting "embed-certs-264552" primary control-plane node in "embed-certs-264552" cluster
	I0127 02:57:30.097647 1119263 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:57:30.097684 1119263 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 02:57:30.097708 1119263 cache.go:56] Caching tarball of preloaded images
	I0127 02:57:30.097781 1119263 preload.go:172] Found /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 02:57:30.097791 1119263 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 02:57:30.097907 1119263 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/embed-certs-264552/config.json ...
	I0127 02:57:30.098140 1119263 start.go:360] acquireMachinesLock for embed-certs-264552: {Name:mka8dc154c517d64837d06e2f84f8bddd0b82c58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 02:57:37.798070 1119263 start.go:364] duration metric: took 7.699880972s to acquireMachinesLock for "embed-certs-264552"
	I0127 02:57:37.798157 1119263 start.go:96] Skipping create...Using existing machine configuration
	I0127 02:57:37.798170 1119263 fix.go:54] fixHost starting: 
	I0127 02:57:37.798658 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:57:37.798710 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:57:37.816163 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I0127 02:57:37.816596 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:57:37.817148 1119263 main.go:141] libmachine: Using API Version  1
	I0127 02:57:37.817174 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:57:37.817499 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:57:37.817725 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 02:57:37.817878 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 02:57:37.819316 1119263 fix.go:112] recreateIfNeeded on embed-certs-264552: state=Stopped err=<nil>
	I0127 02:57:37.819340 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	W0127 02:57:37.819484 1119263 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 02:57:37.821788 1119263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-264552" ...
	I0127 02:57:37.823046 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Start
	I0127 02:57:37.823228 1119263 main.go:141] libmachine: (embed-certs-264552) starting domain...
	I0127 02:57:37.823250 1119263 main.go:141] libmachine: (embed-certs-264552) ensuring networks are active...
	I0127 02:57:37.823916 1119263 main.go:141] libmachine: (embed-certs-264552) Ensuring network default is active
	I0127 02:57:37.824244 1119263 main.go:141] libmachine: (embed-certs-264552) Ensuring network mk-embed-certs-264552 is active
	I0127 02:57:37.824663 1119263 main.go:141] libmachine: (embed-certs-264552) getting domain XML...
	I0127 02:57:37.825496 1119263 main.go:141] libmachine: (embed-certs-264552) creating domain...
	I0127 02:57:39.091246 1119263 main.go:141] libmachine: (embed-certs-264552) waiting for IP...
	I0127 02:57:39.092126 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:39.092596 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:39.092735 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:39.092611 1119481 retry.go:31] will retry after 203.380974ms: waiting for domain to come up
	I0127 02:57:39.298131 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:39.298698 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:39.298736 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:39.298642 1119481 retry.go:31] will retry after 310.211273ms: waiting for domain to come up
	I0127 02:57:39.610033 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:39.610522 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:39.610557 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:39.610486 1119481 retry.go:31] will retry after 438.991491ms: waiting for domain to come up
	I0127 02:57:40.051045 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:40.051557 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:40.051584 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:40.051529 1119481 retry.go:31] will retry after 380.61014ms: waiting for domain to come up
	I0127 02:57:40.434087 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:40.434621 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:40.434657 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:40.434571 1119481 retry.go:31] will retry after 694.829884ms: waiting for domain to come up
	I0127 02:57:41.131112 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:41.131574 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:41.131604 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:41.131545 1119481 retry.go:31] will retry after 891.096029ms: waiting for domain to come up
	I0127 02:57:42.024516 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:42.025007 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:42.025051 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:42.024997 1119481 retry.go:31] will retry after 1.14952124s: waiting for domain to come up
	I0127 02:57:43.175962 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:43.176491 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:43.176548 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:43.176467 1119481 retry.go:31] will retry after 1.436311802s: waiting for domain to come up
	I0127 02:57:44.615014 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:44.615546 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:44.615579 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:44.615521 1119481 retry.go:31] will retry after 1.64838551s: waiting for domain to come up
	I0127 02:57:46.266237 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:46.266979 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:46.267008 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:46.266910 1119481 retry.go:31] will retry after 1.730082422s: waiting for domain to come up
	I0127 02:57:47.999187 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:47.999738 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:47.999773 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:47.999706 1119481 retry.go:31] will retry after 2.414399315s: waiting for domain to come up
	I0127 02:57:50.415345 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:50.415761 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:50.415795 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:50.415721 1119481 retry.go:31] will retry after 2.475305411s: waiting for domain to come up
	I0127 02:57:52.892606 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:52.893230 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | unable to find current IP address of domain embed-certs-264552 in network mk-embed-certs-264552
	I0127 02:57:52.893258 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | I0127 02:57:52.893177 1119481 retry.go:31] will retry after 2.856083125s: waiting for domain to come up
	I0127 02:57:55.753180 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:55.753606 1119263 main.go:141] libmachine: (embed-certs-264552) found domain IP: 192.168.39.145
	I0127 02:57:55.753629 1119263 main.go:141] libmachine: (embed-certs-264552) reserving static IP address...
	I0127 02:57:55.753644 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has current primary IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:55.754003 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "embed-certs-264552", mac: "52:54:00:89:7a:0a", ip: "192.168.39.145"} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:55.754031 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | skip adding static IP to network mk-embed-certs-264552 - found existing host DHCP lease matching {name: "embed-certs-264552", mac: "52:54:00:89:7a:0a", ip: "192.168.39.145"}
	I0127 02:57:55.754046 1119263 main.go:141] libmachine: (embed-certs-264552) reserved static IP address 192.168.39.145 for domain embed-certs-264552
	I0127 02:57:55.754056 1119263 main.go:141] libmachine: (embed-certs-264552) waiting for SSH...
	I0127 02:57:55.754067 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Getting to WaitForSSH function...
	I0127 02:57:55.756194 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:55.756456 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:55.756493 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:55.756623 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Using SSH client type: external
	I0127 02:57:55.756643 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa (-rw-------)
	I0127 02:57:55.756676 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 02:57:55.756685 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | About to run SSH command:
	I0127 02:57:55.756693 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | exit 0
	I0127 02:57:55.880939 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | SSH cmd err, output: <nil>: 
	I0127 02:57:55.881415 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetConfigRaw
	I0127 02:57:55.882091 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetIP
	I0127 02:57:55.884531 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:55.884918 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:55.884974 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:55.885204 1119263 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/embed-certs-264552/config.json ...
	I0127 02:57:55.885452 1119263 machine.go:93] provisionDockerMachine start ...
	I0127 02:57:55.885471 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 02:57:55.885675 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 02:57:55.887864 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:55.888200 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:55.888234 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:55.888378 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 02:57:55.888551 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 02:57:55.888704 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 02:57:55.888799 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 02:57:55.888910 1119263 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:55.889154 1119263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0127 02:57:55.889168 1119263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 02:57:55.997701 1119263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 02:57:55.997735 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetMachineName
	I0127 02:57:55.998014 1119263 buildroot.go:166] provisioning hostname "embed-certs-264552"
	I0127 02:57:55.998035 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetMachineName
	I0127 02:57:55.998242 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 02:57:56.001301 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.001692 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:56.001729 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.001850 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 02:57:56.002039 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 02:57:56.002232 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 02:57:56.002395 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 02:57:56.002618 1119263 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:56.002799 1119263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0127 02:57:56.002815 1119263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-264552 && echo "embed-certs-264552" | sudo tee /etc/hostname
	I0127 02:57:56.123358 1119263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-264552
	
	I0127 02:57:56.123388 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 02:57:56.126218 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.126523 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:56.126557 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.126695 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 02:57:56.126914 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 02:57:56.127089 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 02:57:56.127247 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 02:57:56.127418 1119263 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:56.127609 1119263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0127 02:57:56.127635 1119263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-264552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-264552/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-264552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 02:57:56.242226 1119263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:57:56.242258 1119263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-1057178/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-1057178/.minikube}
	I0127 02:57:56.242302 1119263 buildroot.go:174] setting up certificates
	I0127 02:57:56.242314 1119263 provision.go:84] configureAuth start
	I0127 02:57:56.242324 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetMachineName
	I0127 02:57:56.242637 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetIP
	I0127 02:57:56.244992 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.245369 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:56.245397 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.245538 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 02:57:56.247741 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.248110 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:56.248142 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.248254 1119263 provision.go:143] copyHostCerts
	I0127 02:57:56.248329 1119263 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem, removing ...
	I0127 02:57:56.248342 1119263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem
	I0127 02:57:56.248396 1119263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem (1078 bytes)
	I0127 02:57:56.248505 1119263 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem, removing ...
	I0127 02:57:56.248513 1119263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem
	I0127 02:57:56.248540 1119263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem (1123 bytes)
	I0127 02:57:56.248605 1119263 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem, removing ...
	I0127 02:57:56.248612 1119263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem
	I0127 02:57:56.248628 1119263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem (1675 bytes)
	I0127 02:57:56.248703 1119263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem org=jenkins.embed-certs-264552 san=[127.0.0.1 192.168.39.145 embed-certs-264552 localhost minikube]
	I0127 02:57:56.429229 1119263 provision.go:177] copyRemoteCerts
	I0127 02:57:56.429293 1119263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 02:57:56.429323 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 02:57:56.432009 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.432351 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:56.432391 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.432545 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 02:57:56.432743 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 02:57:56.432915 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 02:57:56.433066 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 02:57:56.519780 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 02:57:56.545121 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 02:57:56.569474 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 02:57:56.593347 1119263 provision.go:87] duration metric: took 351.018717ms to configureAuth
	I0127 02:57:56.593375 1119263 buildroot.go:189] setting minikube options for container-runtime
	I0127 02:57:56.593597 1119263 config.go:182] Loaded profile config "embed-certs-264552": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:57:56.593613 1119263 machine.go:96] duration metric: took 708.149102ms to provisionDockerMachine
	I0127 02:57:56.593623 1119263 start.go:293] postStartSetup for "embed-certs-264552" (driver="kvm2")
	I0127 02:57:56.593635 1119263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 02:57:56.593670 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 02:57:56.593983 1119263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 02:57:56.594012 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 02:57:56.596573 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.596920 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:56.596968 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.597114 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 02:57:56.597322 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 02:57:56.597498 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 02:57:56.597662 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 02:57:56.679403 1119263 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 02:57:56.683830 1119263 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 02:57:56.683853 1119263 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/addons for local assets ...
	I0127 02:57:56.683923 1119263 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/files for local assets ...
	I0127 02:57:56.684068 1119263 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem -> 10644392.pem in /etc/ssl/certs
	I0127 02:57:56.684187 1119263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 02:57:56.693564 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /etc/ssl/certs/10644392.pem (1708 bytes)
	I0127 02:57:56.718180 1119263 start.go:296] duration metric: took 124.538442ms for postStartSetup
	I0127 02:57:56.718240 1119263 fix.go:56] duration metric: took 18.920069497s for fixHost
	I0127 02:57:56.718287 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 02:57:56.721201 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.721603 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:56.721633 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.721792 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 02:57:56.721993 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 02:57:56.722134 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 02:57:56.722281 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 02:57:56.722426 1119263 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:56.722618 1119263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0127 02:57:56.722631 1119263 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 02:57:56.833877 1119263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946676.806648411
	
	I0127 02:57:56.833905 1119263 fix.go:216] guest clock: 1737946676.806648411
	I0127 02:57:56.833915 1119263 fix.go:229] Guest: 2025-01-27 02:57:56.806648411 +0000 UTC Remote: 2025-01-27 02:57:56.718248884 +0000 UTC m=+26.778129922 (delta=88.399527ms)
	I0127 02:57:56.833940 1119263 fix.go:200] guest clock delta is within tolerance: 88.399527ms
	I0127 02:57:56.833945 1119263 start.go:83] releasing machines lock for "embed-certs-264552", held for 19.035842393s
	I0127 02:57:56.833970 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 02:57:56.834274 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetIP
	I0127 02:57:56.837291 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.837649 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:56.837677 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.837876 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 02:57:56.838375 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 02:57:56.838583 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 02:57:56.838707 1119263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 02:57:56.838754 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 02:57:56.838833 1119263 ssh_runner.go:195] Run: cat /version.json
	I0127 02:57:56.838880 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 02:57:56.841663 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.841693 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.842027 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:56.842053 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.842082 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:56.842105 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:56.842229 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 02:57:56.842344 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 02:57:56.842423 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 02:57:56.842498 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 02:57:56.842565 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 02:57:56.842583 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 02:57:56.842700 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 02:57:56.842754 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 02:57:56.923499 1119263 ssh_runner.go:195] Run: systemctl --version
	I0127 02:57:56.945729 1119263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 02:57:56.952141 1119263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 02:57:56.952235 1119263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 02:57:56.969356 1119263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 02:57:56.969381 1119263 start.go:495] detecting cgroup driver to use...
	I0127 02:57:56.969457 1119263 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 02:57:56.996584 1119263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 02:57:57.013328 1119263 docker.go:217] disabling cri-docker service (if available) ...
	I0127 02:57:57.013398 1119263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 02:57:57.029358 1119263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 02:57:57.043723 1119263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 02:57:57.158500 1119263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 02:57:57.331709 1119263 docker.go:233] disabling docker service ...
	I0127 02:57:57.331816 1119263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 02:57:57.346031 1119263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 02:57:57.359686 1119263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 02:57:57.478479 1119263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 02:57:57.605789 1119263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 02:57:57.621376 1119263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 02:57:57.640167 1119263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 02:57:57.650575 1119263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 02:57:57.661098 1119263 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 02:57:57.661177 1119263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 02:57:57.673053 1119263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 02:57:57.688547 1119263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 02:57:57.703167 1119263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 02:57:57.715075 1119263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 02:57:57.726027 1119263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 02:57:57.736198 1119263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 02:57:57.746579 1119263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 02:57:57.756850 1119263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 02:57:57.766195 1119263 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 02:57:57.766281 1119263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 02:57:57.778928 1119263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 02:57:57.789061 1119263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:57:57.925671 1119263 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 02:57:57.956460 1119263 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 02:57:57.956621 1119263 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 02:57:57.961948 1119263 retry.go:31] will retry after 591.535201ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 02:57:58.554530 1119263 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 02:57:58.561786 1119263 start.go:563] Will wait 60s for crictl version
	I0127 02:57:58.561866 1119263 ssh_runner.go:195] Run: which crictl
	I0127 02:57:58.567542 1119263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 02:57:58.616303 1119263 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 02:57:58.616388 1119263 ssh_runner.go:195] Run: containerd --version
	I0127 02:57:58.645409 1119263 ssh_runner.go:195] Run: containerd --version
	I0127 02:57:58.676420 1119263 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 02:57:58.677615 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetIP
	I0127 02:57:58.680496 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:58.680989 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 02:57:58.681014 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 02:57:58.681221 1119263 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 02:57:58.685690 1119263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:57:58.699107 1119263 kubeadm.go:883] updating cluster {Name:embed-certs-264552 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-264552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 02:57:58.699264 1119263 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:57:58.699357 1119263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:57:58.741074 1119263 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 02:57:58.741100 1119263 containerd.go:534] Images already preloaded, skipping extraction
	I0127 02:57:58.741173 1119263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:57:58.779382 1119263 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 02:57:58.779408 1119263 cache_images.go:84] Images are preloaded, skipping loading
	I0127 02:57:58.779416 1119263 kubeadm.go:934] updating node { 192.168.39.145 8443 v1.32.1 containerd true true} ...
	I0127 02:57:58.779567 1119263 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-264552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:embed-certs-264552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 02:57:58.779647 1119263 ssh_runner.go:195] Run: sudo crictl info
	I0127 02:57:58.819365 1119263 cni.go:84] Creating CNI manager for ""
	I0127 02:57:58.819395 1119263 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 02:57:58.819407 1119263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:57:58.819440 1119263 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-264552 NodeName:embed-certs-264552 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 02:57:58.819623 1119263 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-264552"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.145"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 02:57:58.819701 1119263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 02:57:58.830422 1119263 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:57:58.830513 1119263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:57:58.840698 1119263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0127 02:57:58.862582 1119263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:57:58.882075 1119263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2314 bytes)
	I0127 02:57:58.901009 1119263 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I0127 02:57:58.905512 1119263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:57:58.918204 1119263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:57:59.038210 1119263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:57:59.061435 1119263 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/embed-certs-264552 for IP: 192.168.39.145
	I0127 02:57:59.061460 1119263 certs.go:194] generating shared ca certs ...
	I0127 02:57:59.061490 1119263 certs.go:226] acquiring lock for ca certs: {Name:mk567acc23cbe907605c03a2ec03c8e4859e8343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:57:59.061670 1119263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key
	I0127 02:57:59.061742 1119263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key
	I0127 02:57:59.061761 1119263 certs.go:256] generating profile certs ...
	I0127 02:57:59.061880 1119263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/embed-certs-264552/client.key
	I0127 02:57:59.061963 1119263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/embed-certs-264552/apiserver.key.bdd9fddb
	I0127 02:57:59.062031 1119263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/embed-certs-264552/proxy-client.key
	I0127 02:57:59.062181 1119263 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem (1338 bytes)
	W0127 02:57:59.062240 1119263 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439_empty.pem, impossibly tiny 0 bytes
	I0127 02:57:59.062255 1119263 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 02:57:59.062288 1119263 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:57:59.062319 1119263 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:57:59.062352 1119263 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem (1675 bytes)
	I0127 02:57:59.062407 1119263 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem (1708 bytes)
	I0127 02:57:59.063309 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:57:59.127034 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:57:59.157807 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:57:59.190161 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 02:57:59.242035 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/embed-certs-264552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 02:57:59.276362 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/embed-certs-264552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 02:57:59.313489 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/embed-certs-264552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:57:59.339722 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/embed-certs-264552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 02:57:59.365304 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:57:59.392459 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem --> /usr/share/ca-certificates/1064439.pem (1338 bytes)
	I0127 02:57:59.417808 1119263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /usr/share/ca-certificates/10644392.pem (1708 bytes)
	I0127 02:57:59.445646 1119263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:57:59.464535 1119263 ssh_runner.go:195] Run: openssl version
	I0127 02:57:59.470637 1119263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1064439.pem && ln -fs /usr/share/ca-certificates/1064439.pem /etc/ssl/certs/1064439.pem"
	I0127 02:57:59.484150 1119263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1064439.pem
	I0127 02:57:59.489360 1119263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:51 /usr/share/ca-certificates/1064439.pem
	I0127 02:57:59.489436 1119263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1064439.pem
	I0127 02:57:59.496213 1119263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1064439.pem /etc/ssl/certs/51391683.0"
	I0127 02:57:59.509809 1119263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10644392.pem && ln -fs /usr/share/ca-certificates/10644392.pem /etc/ssl/certs/10644392.pem"
	I0127 02:57:59.521762 1119263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10644392.pem
	I0127 02:57:59.526615 1119263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:51 /usr/share/ca-certificates/10644392.pem
	I0127 02:57:59.526680 1119263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10644392.pem
	I0127 02:57:59.533146 1119263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10644392.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:57:59.545772 1119263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:57:59.557790 1119263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:57:59.563058 1119263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:43 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:57:59.563123 1119263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:57:59.569252 1119263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:57:59.582045 1119263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:57:59.587284 1119263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 02:57:59.594635 1119263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 02:57:59.601511 1119263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 02:57:59.608239 1119263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 02:57:59.616922 1119263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 02:57:59.625002 1119263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 02:57:59.631179 1119263 kubeadm.go:392] StartCluster: {Name:embed-certs-264552 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-264552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:57:59.631274 1119263 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 02:57:59.631338 1119263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:57:59.676212 1119263 cri.go:89] found id: "195ab195a95223d2f09afc14e6cbf30602edb0b4075a0b9985451a9061670a28"
	I0127 02:57:59.676242 1119263 cri.go:89] found id: "b6652c2940c81be58c01a5bf7c341a274a7a7d6323b951831d28887c6bf88956"
	I0127 02:57:59.676248 1119263 cri.go:89] found id: "669ad31b63122dfdfd1c2a8f14d2cef1eaca44391dfcac0729f147cb003093f1"
	I0127 02:57:59.676253 1119263 cri.go:89] found id: "723d43cd8a6891e4406a27cebf39aa27b3e4359469599c2eb0fff501cebd39c6"
	I0127 02:57:59.676257 1119263 cri.go:89] found id: "5bbb7a926080d28d001ccdac3f00aa58efa2f600f8c39d512831c835d5810124"
	I0127 02:57:59.676261 1119263 cri.go:89] found id: "df7837036a7c7cc3cf87044d23736ee27e2f473eb881a303f4b3cbb6519934af"
	I0127 02:57:59.676265 1119263 cri.go:89] found id: "d615c2ee522ea5cf2d7655a4fb087057c2158868c79880d8bcb982f8e811cf1a"
	I0127 02:57:59.676269 1119263 cri.go:89] found id: ""
	I0127 02:57:59.676344 1119263 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 02:57:59.697049 1119263 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T02:57:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 02:57:59.697117 1119263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 02:57:59.707676 1119263 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 02:57:59.707699 1119263 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 02:57:59.707752 1119263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 02:57:59.720297 1119263 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:57:59.721373 1119263 kubeconfig.go:125] found "embed-certs-264552" server: "https://192.168.39.145:8443"
	I0127 02:57:59.723059 1119263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 02:57:59.737380 1119263 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.145
	I0127 02:57:59.737420 1119263 kubeadm.go:1160] stopping kube-system containers ...
	I0127 02:57:59.737438 1119263 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 02:57:59.737503 1119263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:57:59.774069 1119263 cri.go:89] found id: "195ab195a95223d2f09afc14e6cbf30602edb0b4075a0b9985451a9061670a28"
	I0127 02:57:59.774095 1119263 cri.go:89] found id: "b6652c2940c81be58c01a5bf7c341a274a7a7d6323b951831d28887c6bf88956"
	I0127 02:57:59.774101 1119263 cri.go:89] found id: "669ad31b63122dfdfd1c2a8f14d2cef1eaca44391dfcac0729f147cb003093f1"
	I0127 02:57:59.774106 1119263 cri.go:89] found id: "723d43cd8a6891e4406a27cebf39aa27b3e4359469599c2eb0fff501cebd39c6"
	I0127 02:57:59.774111 1119263 cri.go:89] found id: "5bbb7a926080d28d001ccdac3f00aa58efa2f600f8c39d512831c835d5810124"
	I0127 02:57:59.774115 1119263 cri.go:89] found id: "df7837036a7c7cc3cf87044d23736ee27e2f473eb881a303f4b3cbb6519934af"
	I0127 02:57:59.774119 1119263 cri.go:89] found id: "d615c2ee522ea5cf2d7655a4fb087057c2158868c79880d8bcb982f8e811cf1a"
	I0127 02:57:59.774123 1119263 cri.go:89] found id: ""
	I0127 02:57:59.774130 1119263 cri.go:252] Stopping containers: [195ab195a95223d2f09afc14e6cbf30602edb0b4075a0b9985451a9061670a28 b6652c2940c81be58c01a5bf7c341a274a7a7d6323b951831d28887c6bf88956 669ad31b63122dfdfd1c2a8f14d2cef1eaca44391dfcac0729f147cb003093f1 723d43cd8a6891e4406a27cebf39aa27b3e4359469599c2eb0fff501cebd39c6 5bbb7a926080d28d001ccdac3f00aa58efa2f600f8c39d512831c835d5810124 df7837036a7c7cc3cf87044d23736ee27e2f473eb881a303f4b3cbb6519934af d615c2ee522ea5cf2d7655a4fb087057c2158868c79880d8bcb982f8e811cf1a]
	I0127 02:57:59.774191 1119263 ssh_runner.go:195] Run: which crictl
	I0127 02:57:59.778681 1119263 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 195ab195a95223d2f09afc14e6cbf30602edb0b4075a0b9985451a9061670a28 b6652c2940c81be58c01a5bf7c341a274a7a7d6323b951831d28887c6bf88956 669ad31b63122dfdfd1c2a8f14d2cef1eaca44391dfcac0729f147cb003093f1 723d43cd8a6891e4406a27cebf39aa27b3e4359469599c2eb0fff501cebd39c6 5bbb7a926080d28d001ccdac3f00aa58efa2f600f8c39d512831c835d5810124 df7837036a7c7cc3cf87044d23736ee27e2f473eb881a303f4b3cbb6519934af d615c2ee522ea5cf2d7655a4fb087057c2158868c79880d8bcb982f8e811cf1a
	I0127 02:57:59.815602 1119263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 02:57:59.833722 1119263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:57:59.844114 1119263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 02:57:59.844139 1119263 kubeadm.go:157] found existing configuration files:
	
	I0127 02:57:59.844199 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 02:57:59.853504 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 02:57:59.853579 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:57:59.863537 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 02:57:59.872859 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 02:57:59.872927 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:57:59.883570 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 02:57:59.894030 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 02:57:59.894121 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:57:59.904662 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 02:57:59.914903 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 02:57:59.914973 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 02:57:59.925155 1119263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 02:57:59.935912 1119263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:00.071953 1119263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:00.947988 1119263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:01.158474 1119263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:01.235836 1119263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:01.347969 1119263 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:58:01.348067 1119263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:58:01.848485 1119263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:58:02.348880 1119263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:58:02.848338 1119263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:58:02.866016 1119263 api_server.go:72] duration metric: took 1.518050769s to wait for apiserver process to appear ...
	I0127 02:58:02.866044 1119263 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:58:02.866063 1119263 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0127 02:58:05.189767 1119263 api_server.go:279] https://192.168.39.145:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 02:58:05.189820 1119263 api_server.go:103] status: https://192.168.39.145:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 02:58:05.189840 1119263 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0127 02:58:05.217618 1119263 api_server.go:279] https://192.168.39.145:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 02:58:05.217656 1119263 api_server.go:103] status: https://192.168.39.145:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 02:58:05.366998 1119263 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0127 02:58:05.378932 1119263 api_server.go:279] https://192.168.39.145:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:58:05.378979 1119263 api_server.go:103] status: https://192.168.39.145:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:58:05.866613 1119263 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0127 02:58:05.871890 1119263 api_server.go:279] https://192.168.39.145:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:58:05.871918 1119263 api_server.go:103] status: https://192.168.39.145:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:58:06.366583 1119263 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0127 02:58:06.373032 1119263 api_server.go:279] https://192.168.39.145:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:58:06.373072 1119263 api_server.go:103] status: https://192.168.39.145:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:58:06.866741 1119263 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0127 02:58:06.871623 1119263 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I0127 02:58:06.879248 1119263 api_server.go:141] control plane version: v1.32.1
	I0127 02:58:06.879285 1119263 api_server.go:131] duration metric: took 4.013234167s to wait for apiserver health ...
	I0127 02:58:06.879296 1119263 cni.go:84] Creating CNI manager for ""
	I0127 02:58:06.879302 1119263 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 02:58:06.881154 1119263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 02:58:06.882342 1119263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 02:58:06.895727 1119263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 02:58:06.917085 1119263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 02:58:06.928321 1119263 system_pods.go:59] 8 kube-system pods found
	I0127 02:58:06.928367 1119263 system_pods.go:61] "coredns-668d6bf9bc-8fq6h" [d58a3bf9-2451-46de-994a-22fa3fbc85b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 02:58:06.928378 1119263 system_pods.go:61] "etcd-embed-certs-264552" [74670f27-aec2-49ec-9a89-9b82539120f3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 02:58:06.928390 1119263 system_pods.go:61] "kube-apiserver-embed-certs-264552" [81a10116-6cb5-4f32-ac1e-5eee2a58a3ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 02:58:06.928404 1119263 system_pods.go:61] "kube-controller-manager-embed-certs-264552" [91221231-967c-4646-8050-d41dfd25baff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 02:58:06.928411 1119263 system_pods.go:61] "kube-proxy-92bz2" [ea75184c-c912-4da6-8b25-122c46e5d872] Running
	I0127 02:58:06.928421 1119263 system_pods.go:61] "kube-scheduler-embed-certs-264552" [bc5556fe-4e6e-403a-90b3-80e9539ba496] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 02:58:06.928431 1119263 system_pods.go:61] "metrics-server-f79f97bbb-wkg98" [80791883-1e4f-455c-b15a-649c86d007f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 02:58:06.928435 1119263 system_pods.go:61] "storage-provisioner" [cd1562c5-05ad-49bd-81ea-6718aee6f58f] Running
	I0127 02:58:06.928442 1119263 system_pods.go:74] duration metric: took 11.329915ms to wait for pod list to return data ...
	I0127 02:58:06.928453 1119263 node_conditions.go:102] verifying NodePressure condition ...
	I0127 02:58:06.933422 1119263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 02:58:06.933460 1119263 node_conditions.go:123] node cpu capacity is 2
	I0127 02:58:06.933474 1119263 node_conditions.go:105] duration metric: took 5.015513ms to run NodePressure ...
	I0127 02:58:06.933513 1119263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:07.214054 1119263 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 02:58:07.220444 1119263 kubeadm.go:739] kubelet initialised
	I0127 02:58:07.220472 1119263 kubeadm.go:740] duration metric: took 6.381328ms waiting for restarted kubelet to initialise ...
	I0127 02:58:07.220486 1119263 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:58:07.226534 1119263 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-8fq6h" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:09.239184 1119263 pod_ready.go:103] pod "coredns-668d6bf9bc-8fq6h" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:11.733234 1119263 pod_ready.go:93] pod "coredns-668d6bf9bc-8fq6h" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:11.733260 1119263 pod_ready.go:82] duration metric: took 4.506684942s for pod "coredns-668d6bf9bc-8fq6h" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:11.733269 1119263 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:13.741294 1119263 pod_ready.go:103] pod "etcd-embed-certs-264552" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:15.743152 1119263 pod_ready.go:103] pod "etcd-embed-certs-264552" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:18.242636 1119263 pod_ready.go:103] pod "etcd-embed-certs-264552" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:19.750672 1119263 pod_ready.go:93] pod "etcd-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:19.750714 1119263 pod_ready.go:82] duration metric: took 8.017425184s for pod "etcd-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:19.750729 1119263 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:19.763414 1119263 pod_ready.go:93] pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:19.763448 1119263 pod_ready.go:82] duration metric: took 12.710041ms for pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:19.763473 1119263 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:19.773146 1119263 pod_ready.go:93] pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:19.773174 1119263 pod_ready.go:82] duration metric: took 9.689579ms for pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:19.773188 1119263 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-92bz2" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:19.779577 1119263 pod_ready.go:93] pod "kube-proxy-92bz2" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:19.779606 1119263 pod_ready.go:82] duration metric: took 6.408918ms for pod "kube-proxy-92bz2" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:19.779621 1119263 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:19.784682 1119263 pod_ready.go:93] pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:19.784704 1119263 pod_ready.go:82] duration metric: took 5.073696ms for pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:19.784716 1119263 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:21.792255 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:24.292424 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:26.298488 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:28.793105 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:31.293252 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:33.792202 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:36.291041 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:38.291580 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:40.293622 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:42.813571 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:45.292331 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:47.435938 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:49.793680 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:52.291202 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:54.291775 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:56.790767 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:58.791276 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:00.791904 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:03.291639 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:05.292837 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:07.793851 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:10.290537 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:12.292537 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:14.292657 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:16.791029 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:18.792327 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:20.793150 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:23.290949 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:25.291101 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:27.292768 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:29.793751 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:32.291987 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:34.792692 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:37.291482 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:39.791659 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:41.793095 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:44.292122 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:46.791180 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:48.792117 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:51.291607 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:53.292265 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:55.792535 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:57.794536 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:00.290236 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:02.292045 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:04.790435 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:06.791856 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:09.292205 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:11.791741 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:14.292608 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:16.294762 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:18.792706 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:21.290093 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:23.292208 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:25.792427 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:27.793324 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:30.291007 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:32.291874 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:34.292278 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:36.791768 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:39.292063 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:41.791042 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:43.792110 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:46.291114 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:48.292655 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:50.791303 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:52.792084 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:55.291707 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:57.292129 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:59.293091 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:01.796329 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:04.291210 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:06.292365 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:08.294300 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:10.792046 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:13.294625 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:15.791551 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:17.791735 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:19.792179 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:22.291112 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:24.292619 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:26.791370 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:28.792009 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:31.292891 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:33.292972 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:35.793326 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:38.291499 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:40.402073 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:42.792638 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:45.292851 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:47.791112 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:49.791563 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:51.792526 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:54.291474 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:56.292868 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:58.292980 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:00.791431 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:02.793532 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:05.292102 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:07.292399 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:09.792796 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:11.795142 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:14.292215 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:16.293451 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:18.793149 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:19.785753 1119263 pod_ready.go:82] duration metric: took 4m0.001003583s for pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:19.785781 1119263 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 03:02:19.785801 1119263 pod_ready.go:39] duration metric: took 4m12.565302655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:19.785832 1119263 kubeadm.go:597] duration metric: took 4m20.078127881s to restartPrimaryControlPlane
	W0127 03:02:19.785891 1119263 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:19.785918 1119263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:02:21.866190 1119263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.080241643s)
	I0127 03:02:21.866293 1119263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:21.886667 1119263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:21.901554 1119263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:21.915270 1119263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:21.915296 1119263 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:21.915369 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:21.929169 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:21.929294 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:21.942913 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:21.956444 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:21.956522 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:21.970342 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:21.989145 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:21.989232 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:22.001913 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:22.013198 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:22.013270 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:22.026131 1119263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:22.226370 1119263 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:32.408333 1119263 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:32.408420 1119263 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:32.408564 1119263 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:32.408723 1119263 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:32.408850 1119263 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:32.408936 1119263 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:32.410600 1119263 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:32.410694 1119263 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:32.410784 1119263 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:32.410899 1119263 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:32.410985 1119263 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:32.411061 1119263 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:32.411144 1119263 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:32.411243 1119263 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:32.411349 1119263 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:32.411474 1119263 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:32.411592 1119263 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:32.411654 1119263 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:32.411755 1119263 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:32.411823 1119263 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:32.411900 1119263 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:32.411957 1119263 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:32.412019 1119263 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:32.412077 1119263 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:32.412166 1119263 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:32.412460 1119263 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:32.415088 1119263 out.go:235]   - Booting up control plane ...
	I0127 03:02:32.415215 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:32.415349 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:32.415444 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:32.415597 1119263 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:32.415722 1119263 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:32.415772 1119263 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:32.415934 1119263 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:32.416041 1119263 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:32.416113 1119263 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001709036s
	I0127 03:02:32.416228 1119263 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:32.416326 1119263 kubeadm.go:310] [api-check] The API server is healthy after 6.003070171s
	I0127 03:02:32.416466 1119263 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:32.416619 1119263 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:32.416691 1119263 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:32.416890 1119263 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-264552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:32.416990 1119263 kubeadm.go:310] [bootstrap-token] Using token: glfh41.djplehex31d2nmyn
	I0127 03:02:32.418322 1119263 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:32.418468 1119263 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:32.418553 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:32.418749 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:32.418932 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:32.419089 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:32.419214 1119263 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:32.419378 1119263 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:32.419436 1119263 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:32.419498 1119263 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:32.419505 1119263 kubeadm.go:310] 
	I0127 03:02:32.419581 1119263 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:32.419587 1119263 kubeadm.go:310] 
	I0127 03:02:32.419686 1119263 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:32.419696 1119263 kubeadm.go:310] 
	I0127 03:02:32.419729 1119263 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:32.419809 1119263 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:32.419880 1119263 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:32.419891 1119263 kubeadm.go:310] 
	I0127 03:02:32.419987 1119263 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:32.419998 1119263 kubeadm.go:310] 
	I0127 03:02:32.420067 1119263 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:32.420078 1119263 kubeadm.go:310] 
	I0127 03:02:32.420143 1119263 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:32.420236 1119263 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:32.420319 1119263 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:32.420330 1119263 kubeadm.go:310] 
	I0127 03:02:32.420421 1119263 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:32.420508 1119263 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:32.420519 1119263 kubeadm.go:310] 
	I0127 03:02:32.420616 1119263 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token glfh41.djplehex31d2nmyn \
	I0127 03:02:32.420750 1119263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
	I0127 03:02:32.420781 1119263 kubeadm.go:310] 	--control-plane 
	I0127 03:02:32.420790 1119263 kubeadm.go:310] 
	I0127 03:02:32.420891 1119263 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:32.420902 1119263 kubeadm.go:310] 
	I0127 03:02:32.421036 1119263 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token glfh41.djplehex31d2nmyn \
	I0127 03:02:32.421192 1119263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba 
	I0127 03:02:32.421210 1119263 cni.go:84] Creating CNI manager for ""
	I0127 03:02:32.421220 1119263 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:32.422542 1119263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:32.423722 1119263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:32.436568 1119263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:32.461950 1119263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:32.462072 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:32.462109 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-264552 minikube.k8s.io/updated_at=2025_01_27T03_02_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=embed-certs-264552 minikube.k8s.io/primary=true
	I0127 03:02:32.477721 1119263 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:32.739220 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:33.239786 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:33.740039 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:34.239291 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:34.740312 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:35.240046 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:35.739577 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:36.239666 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:36.396540 1119263 kubeadm.go:1113] duration metric: took 3.934543669s to wait for elevateKubeSystemPrivileges
	I0127 03:02:36.396587 1119263 kubeadm.go:394] duration metric: took 4m36.765414047s to StartCluster
	I0127 03:02:36.396612 1119263 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:36.396700 1119263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:36.399283 1119263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:36.399607 1119263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:36.399896 1119263 config.go:182] Loaded profile config "embed-certs-264552": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:36.399967 1119263 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:36.400065 1119263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-264552"
	I0127 03:02:36.400097 1119263 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-264552"
	W0127 03:02:36.400111 1119263 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:36.400147 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.400364 1119263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-264552"
	I0127 03:02:36.400393 1119263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-264552"
	I0127 03:02:36.400697 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.400746 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.400860 1119263 addons.go:69] Setting dashboard=true in profile "embed-certs-264552"
	I0127 03:02:36.400889 1119263 addons.go:238] Setting addon dashboard=true in "embed-certs-264552"
	I0127 03:02:36.400891 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	W0127 03:02:36.400899 1119263 addons.go:247] addon dashboard should already be in state true
	I0127 03:02:36.400934 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.400962 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.401007 1119263 addons.go:69] Setting metrics-server=true in profile "embed-certs-264552"
	I0127 03:02:36.401034 1119263 addons.go:238] Setting addon metrics-server=true in "embed-certs-264552"
	W0127 03:02:36.401044 1119263 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:36.401078 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.401508 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.401557 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.401777 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.401824 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.401991 1119263 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:36.403910 1119263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:36.422683 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0127 03:02:36.423177 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.423824 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.423851 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.424298 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.424516 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.425635 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0127 03:02:36.425994 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0127 03:02:36.426142 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.426423 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.426703 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.426729 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.427088 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.427111 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.427288 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.427869 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.427910 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.429980 1119263 addons.go:238] Setting addon default-storageclass=true in "embed-certs-264552"
	W0127 03:02:36.429999 1119263 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:36.430029 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.430409 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.430443 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.430902 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.431582 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.431620 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.449634 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0127 03:02:36.450301 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.451062 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.451085 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.451525 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.452191 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.452239 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.455086 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40299
	I0127 03:02:36.455301 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41491
	I0127 03:02:36.455535 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.456246 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.456264 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.456672 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.456898 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.458545 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0127 03:02:36.459300 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.459602 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.460164 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.460195 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.461041 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.461379 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.461672 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.461676 1119263 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:36.461723 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.461915 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.461930 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.462520 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.462923 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.465082 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.465338 1119263 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:36.466448 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:36.466474 1119263 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:36.466495 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.466570 1119263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:36.468155 1119263 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:36.468187 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:36.468209 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.470910 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.471779 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.471818 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.472039 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.472253 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.472399 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.472572 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.475423 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34703
	I0127 03:02:36.476153 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.476804 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.476823 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.477245 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.477505 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.479472 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.481333 1119263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:36.481739 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0127 03:02:36.482275 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.482837 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.482854 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.482868 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:36.482887 1119263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:36.482910 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.483231 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.483493 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.486181 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.486454 1119263 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:36.486475 1119263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:36.486493 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.488039 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.488500 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.488532 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.488756 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.488966 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.489130 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.489289 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.489612 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.489866 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.489889 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.490026 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.490149 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.490261 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.490344 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.494271 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.494636 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.494659 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.495050 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.495292 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.495511 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.495682 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.737773 1119263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:36.826450 1119263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-264552" to be "Ready" ...
	I0127 03:02:36.857580 1119263 node_ready.go:49] node "embed-certs-264552" has status "Ready":"True"
	I0127 03:02:36.857609 1119263 node_ready.go:38] duration metric: took 31.04815ms for node "embed-certs-264552" to be "Ready" ...
	I0127 03:02:36.857623 1119263 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:36.873458 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:36.877540 1119263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:36.957829 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:36.957866 1119263 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:37.005603 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:37.005635 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:37.006377 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:37.031565 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:37.031587 1119263 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:37.100245 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:37.100282 1119263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:37.175281 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:37.175309 1119263 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:37.221791 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:37.221825 1119263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:37.308268 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:37.423632 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:37.423660 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:37.588554 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:37.588586 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:37.589111 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
	I0127 03:02:37.589130 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:37.589147 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:37.589162 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:37.589176 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:37.589462 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:37.589483 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:37.634711 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:37.634744 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:37.635023 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
	I0127 03:02:37.635065 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:37.635073 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:37.649206 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:37.649231 1119263 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:37.780671 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:37.780709 1119263 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:37.963118 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:37.963151 1119263 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:38.051717 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:38.051755 1119263 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:38.102698 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:38.102726 1119263 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:38.177754 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:38.867496 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.861076308s)
	I0127 03:02:38.867579 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.867594 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.868010 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.868037 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.868054 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.868067 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.868377 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.868397 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.923746 1119263 pod_ready.go:103] pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:38.991645 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.683326945s)
	I0127 03:02:38.991708 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.991728 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.992116 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.992137 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.992146 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.992153 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.992566 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
	I0127 03:02:38.992598 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.992624 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.992643 1119263 addons.go:479] Verifying addon metrics-server=true in "embed-certs-264552"
	I0127 03:02:39.990731 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.812917797s)
	I0127 03:02:39.990802 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:39.990818 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:39.991192 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:39.991223 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:39.991235 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:39.991246 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:39.991554 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:39.991575 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:39.993095 1119263 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-264552 addons enable metrics-server
	
	I0127 03:02:39.994564 1119263 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 03:02:39.995898 1119263 addons.go:514] duration metric: took 3.595931069s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 03:02:40.888544 1119263 pod_ready.go:93] pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.888568 1119263 pod_ready.go:82] duration metric: took 4.01099998s for pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.888579 1119263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.895910 1119263 pod_ready.go:93] pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.895941 1119263 pod_ready.go:82] duration metric: took 7.354168ms for pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.895955 1119263 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.900393 1119263 pod_ready.go:93] pod "etcd-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.900415 1119263 pod_ready.go:82] duration metric: took 4.45357ms for pod "etcd-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.900426 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.908664 1119263 pod_ready.go:93] pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.908686 1119263 pod_ready.go:82] duration metric: took 8.251039ms for pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.908697 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:42.917072 1119263 pod_ready.go:103] pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:44.927051 1119263 pod_ready.go:93] pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:44.927083 1119263 pod_ready.go:82] duration metric: took 4.01837775s for pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.927096 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kwqqr" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.939727 1119263 pod_ready.go:93] pod "kube-proxy-kwqqr" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:44.939759 1119263 pod_ready.go:82] duration metric: took 12.654042ms for pod "kube-proxy-kwqqr" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.939772 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.966136 1119263 pod_ready.go:93] pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:44.966165 1119263 pod_ready.go:82] duration metric: took 26.38251ms for pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.966178 1119263 pod_ready.go:39] duration metric: took 8.108541494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:44.966199 1119263 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:44.966260 1119263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:44.991380 1119263 api_server.go:72] duration metric: took 8.59171979s to wait for apiserver process to appear ...
	I0127 03:02:44.991410 1119263 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:44.991439 1119263 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0127 03:02:44.997033 1119263 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I0127 03:02:44.998283 1119263 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:44.998310 1119263 api_server.go:131] duration metric: took 6.891198ms to wait for apiserver health ...
	I0127 03:02:44.998321 1119263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:45.087014 1119263 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:45.087059 1119263 system_pods.go:61] "coredns-668d6bf9bc-mbkl2" [29059a1e-4228-4fbc-bf18-0de800cbb47a] Running
	I0127 03:02:45.087067 1119263 system_pods.go:61] "coredns-668d6bf9bc-n5wn4" [416b5ae4-f786-4b1e-a699-d688b967a6f4] Running
	I0127 03:02:45.087073 1119263 system_pods.go:61] "etcd-embed-certs-264552" [b2389caf-28fb-42d8-9912-8c3829f8bfd6] Running
	I0127 03:02:45.087079 1119263 system_pods.go:61] "kube-apiserver-embed-certs-264552" [0150043f-38b8-4946-84f1-0c9c7aaf7328] Running
	I0127 03:02:45.087084 1119263 system_pods.go:61] "kube-controller-manager-embed-certs-264552" [940554f4-564d-4939-a09a-0ea61e36ff6c] Running
	I0127 03:02:45.087090 1119263 system_pods.go:61] "kube-proxy-kwqqr" [85b35a19-646d-43a8-b90f-c5a5b4a93393] Running
	I0127 03:02:45.087096 1119263 system_pods.go:61] "kube-scheduler-embed-certs-264552" [4a578d9d-f487-4839-a23d-1ec267612f0d] Running
	I0127 03:02:45.087106 1119263 system_pods.go:61] "metrics-server-f79f97bbb-6dg5x" [4b9cd5d7-1055-45ea-8ac9-1a91b9246c0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:45.087114 1119263 system_pods.go:61] "storage-provisioner" [4e4e1f9a-505b-4ed2-ad81-5543176f645a] Running
	I0127 03:02:45.087123 1119263 system_pods.go:74] duration metric: took 88.795129ms to wait for pod list to return data ...
	I0127 03:02:45.087134 1119263 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:02:45.282547 1119263 default_sa.go:45] found service account: "default"
	I0127 03:02:45.282578 1119263 default_sa.go:55] duration metric: took 195.436382ms for default service account to be created ...
	I0127 03:02:45.282589 1119263 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:02:45.486513 1119263 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-264552 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-264552 -n embed-certs-264552
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-264552 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-264552 logs -n 25: (1.403719493s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 02:56 UTC | 27 Jan 25 02:57 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-887091                  | no-preload-887091            | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-887091                                   | no-preload-887091            | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-264552                 | embed-certs-264552           | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-717075       | default-k8s-diff-port-717075 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-264552                                  | embed-certs-264552           | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-717075 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC |                     |
	|         | default-k8s-diff-port-717075                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-760492             | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 03:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-760492 image                           | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	| delete  | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	| start   | -p newest-cni-642127 --memory=2200 --alsologtostderr   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-642127             | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-642127                  | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-642127 --memory=2200 --alsologtostderr   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-642127 image list                           | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	| delete  | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	| delete  | -p no-preload-887091                                   | no-preload-887091            | jenkins | v1.35.0 | 27 Jan 25 03:23 UTC | 27 Jan 25 03:23 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 03:02:00
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 03:02:00.237835 1121411 out.go:345] Setting OutFile to fd 1 ...
	I0127 03:02:00.238128 1121411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:02:00.238140 1121411 out.go:358] Setting ErrFile to fd 2...
	I0127 03:02:00.238146 1121411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:02:00.238345 1121411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 03:02:00.239045 1121411 out.go:352] Setting JSON to false
	I0127 03:02:00.240327 1121411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13467,"bootTime":1737933453,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:02:00.240474 1121411 start.go:139] virtualization: kvm guest
	I0127 03:02:00.242533 1121411 out.go:177] * [newest-cni-642127] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:02:00.244184 1121411 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 03:02:00.244247 1121411 notify.go:220] Checking for updates...
	I0127 03:02:00.246478 1121411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:02:00.247855 1121411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:00.249125 1121411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	I0127 03:02:00.250346 1121411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:02:00.251585 1121411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 03:02:00.253406 1121411 config.go:182] Loaded profile config "newest-cni-642127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:00.254032 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:00.254107 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:00.270414 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33505
	I0127 03:02:00.270862 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:00.271405 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:00.271428 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:00.271776 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:00.271945 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:00.272173 1121411 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 03:02:00.272461 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:00.272496 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:00.287317 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36083
	I0127 03:02:00.287836 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:00.288298 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:00.288340 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:00.288708 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:00.288885 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:00.325767 1121411 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 03:02:00.327047 1121411 start.go:297] selected driver: kvm2
	I0127 03:02:00.327060 1121411 start.go:901] validating driver "kvm2" against &{Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:02:00.327183 1121411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 03:02:00.327982 1121411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:02:00.328064 1121411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-1057178/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 03:02:00.343178 1121411 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 03:02:00.343639 1121411 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 03:02:00.343677 1121411 cni.go:84] Creating CNI manager for ""
	I0127 03:02:00.343730 1121411 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:00.343763 1121411 start.go:340] cluster config:
	{Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:02:00.343883 1121411 iso.go:125] acquiring lock: {Name:mkd30bc9d11f9170e89ad95ce7ba25fa6d1e04f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:02:00.345590 1121411 out.go:177] * Starting "newest-cni-642127" primary control-plane node in "newest-cni-642127" cluster
	I0127 03:02:00.346774 1121411 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 03:02:00.346814 1121411 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 03:02:00.346828 1121411 cache.go:56] Caching tarball of preloaded images
	I0127 03:02:00.346908 1121411 preload.go:172] Found /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 03:02:00.346919 1121411 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 03:02:00.347008 1121411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/config.json ...
	I0127 03:02:00.347215 1121411 start.go:360] acquireMachinesLock for newest-cni-642127: {Name:mka8dc154c517d64837d06e2f84f8bddd0b82c58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 03:02:00.347258 1121411 start.go:364] duration metric: took 23.213µs to acquireMachinesLock for "newest-cni-642127"
	I0127 03:02:00.347273 1121411 start.go:96] Skipping create...Using existing machine configuration
	I0127 03:02:00.347278 1121411 fix.go:54] fixHost starting: 
	I0127 03:02:00.347525 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:00.347569 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:00.362339 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I0127 03:02:00.362837 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:00.363413 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:00.363435 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:00.363738 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:00.363908 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:00.364065 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:00.365643 1121411 fix.go:112] recreateIfNeeded on newest-cni-642127: state=Stopped err=<nil>
	I0127 03:02:00.365669 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	W0127 03:02:00.366076 1121411 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 03:02:00.368560 1121411 out.go:177] * Restarting existing kvm2 VM for "newest-cni-642127" ...
	I0127 03:01:59.553947 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:01.029438 1119007 pod_ready.go:82] duration metric: took 4m0.000430308s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:01.029463 1119007 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 03:02:01.029492 1119007 pod_ready.go:39] duration metric: took 4m12.545085543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:01.029521 1119007 kubeadm.go:597] duration metric: took 4m20.2724454s to restartPrimaryControlPlane
	W0127 03:02:01.029578 1119007 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:01.029603 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:02:03.004910 1119007 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.9752757s)
	I0127 03:02:03.005026 1119007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:03.022327 1119007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:03.033433 1119007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:03.043716 1119007 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:03.043751 1119007 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:03.043807 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:03.053848 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:03.053913 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:03.064618 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:03.075259 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:03.075327 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:03.087088 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:03.098909 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:03.098975 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:03.110053 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:03.119864 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:03.119938 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:03.130987 1119007 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:03.185348 1119007 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:03.185417 1119007 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:03.314698 1119007 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:03.314881 1119007 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:03.315043 1119007 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:03.324401 1119007 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:03.326164 1119007 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:03.326268 1119007 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:03.326359 1119007 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:03.326477 1119007 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:03.326572 1119007 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:03.326663 1119007 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:03.326738 1119007 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:03.326859 1119007 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:03.327073 1119007 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:03.327208 1119007 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:03.327338 1119007 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:03.327408 1119007 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:03.327502 1119007 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:03.521123 1119007 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:03.756848 1119007 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:03.911089 1119007 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:04.122010 1119007 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:04.383085 1119007 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:04.383614 1119007 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:04.386205 1119007 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:00.791431 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:02.793532 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:00.101750 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:02.600452 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:00.369945 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Start
	I0127 03:02:00.370121 1121411 main.go:141] libmachine: (newest-cni-642127) starting domain...
	I0127 03:02:00.370143 1121411 main.go:141] libmachine: (newest-cni-642127) ensuring networks are active...
	I0127 03:02:00.370872 1121411 main.go:141] libmachine: (newest-cni-642127) Ensuring network default is active
	I0127 03:02:00.371180 1121411 main.go:141] libmachine: (newest-cni-642127) Ensuring network mk-newest-cni-642127 is active
	I0127 03:02:00.371540 1121411 main.go:141] libmachine: (newest-cni-642127) getting domain XML...
	I0127 03:02:00.372193 1121411 main.go:141] libmachine: (newest-cni-642127) creating domain...
	I0127 03:02:01.655632 1121411 main.go:141] libmachine: (newest-cni-642127) waiting for IP...
	I0127 03:02:01.656638 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:01.657157 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:01.657251 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:01.657139 1121446 retry.go:31] will retry after 277.784658ms: waiting for domain to come up
	I0127 03:02:01.936660 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:01.937240 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:01.937271 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:01.937207 1121446 retry.go:31] will retry after 238.163617ms: waiting for domain to come up
	I0127 03:02:02.176792 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:02.177474 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:02.177544 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:02.177436 1121446 retry.go:31] will retry after 380.939356ms: waiting for domain to come up
	I0127 03:02:02.560097 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:02.560666 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:02.560700 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:02.560618 1121446 retry.go:31] will retry after 505.552982ms: waiting for domain to come up
	I0127 03:02:03.067443 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:03.067968 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:03.068040 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:03.067965 1121446 retry.go:31] will retry after 727.427105ms: waiting for domain to come up
	I0127 03:02:03.797031 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:03.797596 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:03.797621 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:03.797562 1121446 retry.go:31] will retry after 647.611718ms: waiting for domain to come up
	I0127 03:02:04.447043 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:04.447523 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:04.447556 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:04.447508 1121446 retry.go:31] will retry after 984.747883ms: waiting for domain to come up
	I0127 03:02:04.388044 1119007 out.go:235]   - Booting up control plane ...
	I0127 03:02:04.388157 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:04.388265 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:04.388373 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:04.409379 1119007 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:04.416389 1119007 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:04.416479 1119007 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:04.571487 1119007 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:04.571690 1119007 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:05.072916 1119007 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.574288ms
	I0127 03:02:05.073090 1119007 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:05.292102 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:07.292399 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:09.792796 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:05.099225 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:07.099594 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:09.600572 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:05.434383 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:05.434961 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:05.434994 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:05.434926 1121446 retry.go:31] will retry after 1.239188819s: waiting for domain to come up
	I0127 03:02:06.675638 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:06.676209 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:06.676244 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:06.676172 1121446 retry.go:31] will retry after 1.489275436s: waiting for domain to come up
	I0127 03:02:08.167884 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:08.168365 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:08.168402 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:08.168327 1121446 retry.go:31] will retry after 1.739982698s: waiting for domain to come up
	I0127 03:02:09.910362 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:09.910871 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:09.910964 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:09.910871 1121446 retry.go:31] will retry after 2.79669233s: waiting for domain to come up
	I0127 03:02:10.574512 1119007 kubeadm.go:310] [api-check] The API server is healthy after 5.501444049s
	I0127 03:02:10.590265 1119007 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:10.612200 1119007 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:10.650305 1119007 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:10.650585 1119007 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-887091 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:10.661688 1119007 kubeadm.go:310] [bootstrap-token] Using token: 25alvo.7xrmg7nh4q5v903n
	I0127 03:02:10.663119 1119007 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:10.663280 1119007 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:10.671888 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:10.685310 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:10.690214 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:10.694363 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:10.698959 1119007 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:10.982964 1119007 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:11.430752 1119007 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:11.982446 1119007 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:11.984681 1119007 kubeadm.go:310] 
	I0127 03:02:11.984836 1119007 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:11.984859 1119007 kubeadm.go:310] 
	I0127 03:02:11.984989 1119007 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:11.985010 1119007 kubeadm.go:310] 
	I0127 03:02:11.985048 1119007 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:11.985139 1119007 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:11.985214 1119007 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:11.985223 1119007 kubeadm.go:310] 
	I0127 03:02:11.985308 1119007 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:11.985320 1119007 kubeadm.go:310] 
	I0127 03:02:11.985386 1119007 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:11.985394 1119007 kubeadm.go:310] 
	I0127 03:02:11.985466 1119007 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:11.985573 1119007 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:11.985666 1119007 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:11.985676 1119007 kubeadm.go:310] 
	I0127 03:02:11.985787 1119007 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:11.985893 1119007 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:11.985903 1119007 kubeadm.go:310] 
	I0127 03:02:11.986015 1119007 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 25alvo.7xrmg7nh4q5v903n \
	I0127 03:02:11.986154 1119007 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
	I0127 03:02:11.986187 1119007 kubeadm.go:310] 	--control-plane 
	I0127 03:02:11.986194 1119007 kubeadm.go:310] 
	I0127 03:02:11.986302 1119007 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:11.986313 1119007 kubeadm.go:310] 
	I0127 03:02:11.986421 1119007 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 25alvo.7xrmg7nh4q5v903n \
	I0127 03:02:11.986559 1119007 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba 
	I0127 03:02:11.988046 1119007 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:11.988085 1119007 cni.go:84] Creating CNI manager for ""
	I0127 03:02:11.988096 1119007 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:11.989984 1119007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:11.991565 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:12.008152 1119007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:12.031285 1119007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:12.031368 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:12.031415 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-887091 minikube.k8s.io/updated_at=2025_01_27T03_02_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=no-preload-887091 minikube.k8s.io/primary=true
	I0127 03:02:12.301916 1119007 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:12.302079 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:12.802985 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:11.795142 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:14.292215 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:11.613207 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:14.098783 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:12.710060 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:12.710698 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:12.710737 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:12.710630 1121446 retry.go:31] will retry after 2.899766509s: waiting for domain to come up
	I0127 03:02:13.302566 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:13.802370 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:14.302582 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:14.802350 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:15.302355 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:15.802132 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:15.926758 1119007 kubeadm.go:1113] duration metric: took 3.895467932s to wait for elevateKubeSystemPrivileges
	I0127 03:02:15.926808 1119007 kubeadm.go:394] duration metric: took 4m35.245756492s to StartCluster
	I0127 03:02:15.926834 1119007 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:15.926944 1119007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:15.928428 1119007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:15.928677 1119007 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:15.928795 1119007 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:15.928913 1119007 config.go:182] Loaded profile config "no-preload-887091": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:15.928932 1119007 addons.go:69] Setting metrics-server=true in profile "no-preload-887091"
	I0127 03:02:15.928966 1119007 addons.go:238] Setting addon metrics-server=true in "no-preload-887091"
	I0127 03:02:15.928977 1119007 addons.go:69] Setting dashboard=true in profile "no-preload-887091"
	W0127 03:02:15.928985 1119007 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:15.928991 1119007 addons.go:238] Setting addon dashboard=true in "no-preload-887091"
	I0127 03:02:15.928918 1119007 addons.go:69] Setting storage-provisioner=true in profile "no-preload-887091"
	I0127 03:02:15.929020 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.929025 1119007 addons.go:238] Setting addon storage-provisioner=true in "no-preload-887091"
	W0127 03:02:15.929036 1119007 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:15.928961 1119007 addons.go:69] Setting default-storageclass=true in profile "no-preload-887091"
	I0127 03:02:15.929073 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.929093 1119007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-887091"
	W0127 03:02:15.928999 1119007 addons.go:247] addon dashboard should already be in state true
	I0127 03:02:15.929175 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.929496 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929496 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929544 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929557 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.929547 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.929584 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.929499 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929692 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.930306 1119007 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:15.931877 1119007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:15.952533 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I0127 03:02:15.952549 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0127 03:02:15.952581 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46601
	I0127 03:02:15.952721 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
	I0127 03:02:15.954529 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.954547 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.954808 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.955205 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955229 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.955233 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955253 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.955313 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.955413 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955437 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.955766 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.955849 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.955886 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955947 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.956424 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.956463 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.956469 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.956507 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.956724 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.956927 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.957100 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.957708 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.957746 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.960884 1119007 addons.go:238] Setting addon default-storageclass=true in "no-preload-887091"
	W0127 03:02:15.960910 1119007 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:15.960960 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.961323 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.961366 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.977560 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
	I0127 03:02:15.978028 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.978173 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
	I0127 03:02:15.978517 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0127 03:02:15.978693 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.978872 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.978901 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.979226 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.979298 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.979562 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.979576 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.979593 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.979923 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.980113 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.980289 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.980304 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.980894 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.981251 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.981811 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:15.982385 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:15.983016 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:15.983162 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I0127 03:02:15.983756 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.983837 1119007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:15.984185 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.984202 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.984606 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.985117 1119007 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:15.985204 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.985237 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.985253 1119007 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:15.985273 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:15.985297 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:15.985367 1119007 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:15.986458 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:15.986480 1119007 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:15.986546 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:15.987599 1119007 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:15.988812 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.988933 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:15.989273 1119007 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:15.989471 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:15.989502 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.989571 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:15.989716 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:15.989884 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:15.990033 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:15.990172 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:15.990858 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.991445 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:15.991468 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.991628 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:15.991828 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:15.992248 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:15.992428 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:15.993703 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.994218 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:15.994244 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.994557 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:15.994742 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:15.994902 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:15.995042 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:16.004890 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I0127 03:02:16.005324 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:16.005841 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:16.005861 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:16.006249 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:16.006454 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:16.008475 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:16.008706 1119007 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:16.008719 1119007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:16.008733 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:16.011722 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:16.012561 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:16.012637 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:16.012663 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:16.012777 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:16.012973 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:16.013155 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:16.171165 1119007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:16.193562 1119007 node_ready.go:35] waiting up to 6m0s for node "no-preload-887091" to be "Ready" ...
	I0127 03:02:16.246946 1119007 node_ready.go:49] node "no-preload-887091" has status "Ready":"True"
	I0127 03:02:16.246978 1119007 node_ready.go:38] duration metric: took 53.383421ms for node "no-preload-887091" to be "Ready" ...
	I0127 03:02:16.246992 1119007 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:16.274293 1119007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:16.274621 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:16.274647 1119007 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:16.305232 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:16.327479 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:16.328118 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:16.328136 1119007 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:16.428329 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:16.428364 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:16.466201 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:16.466236 1119007 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:16.599271 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:16.599315 1119007 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:16.638608 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:16.638637 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:16.828108 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:16.828150 1119007 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:16.838645 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:16.838676 1119007 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:16.984773 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:16.984808 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:16.985269 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:16.985286 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:16.985295 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:16.985302 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:16.985629 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:16.985649 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.004424 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:17.004447 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:17.004789 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:17.004799 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:17.004830 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.011294 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:17.011605 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:17.011624 1119007 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:17.109457 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:17.109494 1119007 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:17.218037 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:17.218071 1119007 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:17.272264 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:17.272299 1119007 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:17.346698 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:17.903867 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.57633993s)
	I0127 03:02:17.903940 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:17.903958 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:17.904299 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:17.904382 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:17.904399 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.904412 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:17.904418 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:17.904680 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:17.904702 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.904715 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:18.291876 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.280535535s)
	I0127 03:02:18.291939 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:18.291962 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:18.292296 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:18.292315 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:18.292323 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:18.292329 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:18.293045 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:18.293120 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:18.293147 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:18.293165 1119007 addons.go:479] Verifying addon metrics-server=true in "no-preload-887091"
	I0127 03:02:18.308148 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:19.202588 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.855830221s)
	I0127 03:02:19.202668 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:19.202685 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:19.202996 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:19.203014 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:19.203031 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:19.203046 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:19.203365 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:19.203408 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:19.205207 1119007 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-887091 addons enable metrics-server
	
	I0127 03:02:19.206884 1119007 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 03:02:16.293451 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:18.793149 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:19.785753 1119263 pod_ready.go:82] duration metric: took 4m0.001003583s for pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:19.785781 1119263 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 03:02:19.785801 1119263 pod_ready.go:39] duration metric: took 4m12.565302655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:19.785832 1119263 kubeadm.go:597] duration metric: took 4m20.078127881s to restartPrimaryControlPlane
	W0127 03:02:19.785891 1119263 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:19.785918 1119263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:02:16.101837 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:18.600416 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:15.612007 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:15.612503 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:15.612532 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:15.612477 1121446 retry.go:31] will retry after 4.281984487s: waiting for domain to come up
	I0127 03:02:19.898517 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:19.899156 1121411 main.go:141] libmachine: (newest-cni-642127) found domain IP: 192.168.50.51
	I0127 03:02:19.899184 1121411 main.go:141] libmachine: (newest-cni-642127) reserving static IP address...
	I0127 03:02:19.899199 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has current primary IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:19.899706 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "newest-cni-642127", mac: "52:54:00:b2:c0:f5", ip: "192.168.50.51"} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:19.899748 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | skip adding static IP to network mk-newest-cni-642127 - found existing host DHCP lease matching {name: "newest-cni-642127", mac: "52:54:00:b2:c0:f5", ip: "192.168.50.51"}
	I0127 03:02:19.899765 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Getting to WaitForSSH function...
	I0127 03:02:19.899786 1121411 main.go:141] libmachine: (newest-cni-642127) reserved static IP address 192.168.50.51 for domain newest-cni-642127
	I0127 03:02:19.899794 1121411 main.go:141] libmachine: (newest-cni-642127) waiting for SSH...
	I0127 03:02:19.902680 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:19.903077 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:19.903108 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:19.903425 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Using SSH client type: external
	I0127 03:02:19.903455 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa (-rw-------)
	I0127 03:02:19.903497 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 03:02:19.903528 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | About to run SSH command:
	I0127 03:02:19.903545 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | exit 0
	I0127 03:02:20.033236 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | SSH cmd err, output: <nil>: 
	I0127 03:02:20.033650 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetConfigRaw
	I0127 03:02:20.034423 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
	I0127 03:02:20.037477 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.038000 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.038034 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.038292 1121411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/config.json ...
	I0127 03:02:20.038569 1121411 machine.go:93] provisionDockerMachine start ...
	I0127 03:02:20.038593 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:20.038817 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.041604 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.042029 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.042058 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.042374 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.042730 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.042972 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.043158 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.043362 1121411 main.go:141] libmachine: Using SSH client type: native
	I0127 03:02:20.043631 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0127 03:02:20.043646 1121411 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 03:02:20.162052 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 03:02:20.162088 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetMachineName
	I0127 03:02:20.162389 1121411 buildroot.go:166] provisioning hostname "newest-cni-642127"
	I0127 03:02:20.162416 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetMachineName
	I0127 03:02:20.162603 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.166195 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.166703 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.166735 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.167015 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.167255 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.167440 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.167629 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.167847 1121411 main.go:141] libmachine: Using SSH client type: native
	I0127 03:02:20.168082 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0127 03:02:20.168098 1121411 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-642127 && echo "newest-cni-642127" | sudo tee /etc/hostname
	I0127 03:02:19.208319 1119007 addons.go:514] duration metric: took 3.279531879s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 03:02:20.784826 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:20.304578 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-642127
	
	I0127 03:02:20.304614 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.307961 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.308494 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.308576 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.308725 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.308929 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.309194 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.309354 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.309604 1121411 main.go:141] libmachine: Using SSH client type: native
	I0127 03:02:20.309846 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0127 03:02:20.309865 1121411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-642127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-642127/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-642127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 03:02:20.431545 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:02:20.431586 1121411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-1057178/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-1057178/.minikube}
	I0127 03:02:20.431617 1121411 buildroot.go:174] setting up certificates
	I0127 03:02:20.431633 1121411 provision.go:84] configureAuth start
	I0127 03:02:20.431649 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetMachineName
	I0127 03:02:20.431999 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
	I0127 03:02:20.435425 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.435885 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.435918 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.436172 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.439389 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.439969 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.440002 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.440288 1121411 provision.go:143] copyHostCerts
	I0127 03:02:20.440368 1121411 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem, removing ...
	I0127 03:02:20.440392 1121411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem
	I0127 03:02:20.440475 1121411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem (1078 bytes)
	I0127 03:02:20.440610 1121411 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem, removing ...
	I0127 03:02:20.440672 1121411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem
	I0127 03:02:20.440724 1121411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem (1123 bytes)
	I0127 03:02:20.440826 1121411 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem, removing ...
	I0127 03:02:20.440838 1121411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem
	I0127 03:02:20.440872 1121411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem (1675 bytes)
	I0127 03:02:20.441000 1121411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem org=jenkins.newest-cni-642127 san=[127.0.0.1 192.168.50.51 localhost minikube newest-cni-642127]
	I0127 03:02:20.582957 1121411 provision.go:177] copyRemoteCerts
	I0127 03:02:20.583042 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 03:02:20.583082 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.586468 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.586937 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.586967 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.587297 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.587493 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.587653 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.587816 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:20.678286 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 03:02:20.710984 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 03:02:20.743521 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 03:02:20.776342 1121411 provision.go:87] duration metric: took 344.690364ms to configureAuth
	I0127 03:02:20.776390 1121411 buildroot.go:189] setting minikube options for container-runtime
	I0127 03:02:20.776645 1121411 config.go:182] Loaded profile config "newest-cni-642127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:20.776665 1121411 machine.go:96] duration metric: took 738.080097ms to provisionDockerMachine
	I0127 03:02:20.776676 1121411 start.go:293] postStartSetup for "newest-cni-642127" (driver="kvm2")
	I0127 03:02:20.776689 1121411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 03:02:20.776728 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:20.777166 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 03:02:20.777201 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.781262 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.781754 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.781782 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.782169 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.782409 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.782633 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.782886 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:20.877090 1121411 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 03:02:20.882893 1121411 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 03:02:20.882941 1121411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/addons for local assets ...
	I0127 03:02:20.883012 1121411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/files for local assets ...
	I0127 03:02:20.883121 1121411 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem -> 10644392.pem in /etc/ssl/certs
	I0127 03:02:20.883262 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 03:02:20.897501 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /etc/ssl/certs/10644392.pem (1708 bytes)
	I0127 03:02:20.927044 1121411 start.go:296] duration metric: took 150.330171ms for postStartSetup
	I0127 03:02:20.927103 1121411 fix.go:56] duration metric: took 20.579822967s for fixHost
	I0127 03:02:20.927133 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.930644 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.931093 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.931129 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.931414 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.931717 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.931919 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.932105 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.932280 1121411 main.go:141] libmachine: Using SSH client type: native
	I0127 03:02:20.932530 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0127 03:02:20.932545 1121411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 03:02:21.046461 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946941.010071066
	
	I0127 03:02:21.046493 1121411 fix.go:216] guest clock: 1737946941.010071066
	I0127 03:02:21.046504 1121411 fix.go:229] Guest: 2025-01-27 03:02:21.010071066 +0000 UTC Remote: 2025-01-27 03:02:20.927108919 +0000 UTC m=+20.729857739 (delta=82.962147ms)
	I0127 03:02:21.046536 1121411 fix.go:200] guest clock delta is within tolerance: 82.962147ms
	I0127 03:02:21.046543 1121411 start.go:83] releasing machines lock for "newest-cni-642127", held for 20.699275534s
	I0127 03:02:21.046580 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:21.046929 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
	I0127 03:02:21.050101 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.050549 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:21.050572 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.050930 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:21.051682 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:21.051910 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:21.052040 1121411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 03:02:21.052128 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:21.052184 1121411 ssh_runner.go:195] Run: cat /version.json
	I0127 03:02:21.052219 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:21.055762 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.055836 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.056356 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:21.056389 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.056429 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:21.056447 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.056720 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:21.056899 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:21.056974 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:21.057099 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:21.057147 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:21.057303 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:21.057708 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:21.057902 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:21.169709 1121411 ssh_runner.go:195] Run: systemctl --version
	I0127 03:02:21.177622 1121411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 03:02:21.184029 1121411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 03:02:21.184112 1121411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 03:02:21.202861 1121411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 03:02:21.202890 1121411 start.go:495] detecting cgroup driver to use...
	I0127 03:02:21.202967 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 03:02:21.236110 1121411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 03:02:21.250683 1121411 docker.go:217] disabling cri-docker service (if available) ...
	I0127 03:02:21.250796 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 03:02:21.266354 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 03:02:21.284146 1121411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 03:02:21.436406 1121411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 03:02:21.620560 1121411 docker.go:233] disabling docker service ...
	I0127 03:02:21.620655 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 03:02:21.639534 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 03:02:21.657179 1121411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 03:02:21.828676 1121411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 03:02:21.993891 1121411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 03:02:22.011124 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 03:02:22.037734 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 03:02:22.049863 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 03:02:22.064327 1121411 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 03:02:22.064427 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 03:02:22.080328 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 03:02:22.093806 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 03:02:22.106165 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 03:02:22.117782 1121411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 03:02:22.129650 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 03:02:22.152872 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 03:02:22.165020 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 03:02:22.177918 1121411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 03:02:22.188259 1121411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 03:02:22.188355 1121411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 03:02:22.204350 1121411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 03:02:22.218093 1121411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:22.356619 1121411 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 03:02:22.385087 1121411 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 03:02:22.385172 1121411 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 03:02:22.389980 1121411 retry.go:31] will retry after 758.524819ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 03:02:23.148722 1121411 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 03:02:23.154533 1121411 start.go:563] Will wait 60s for crictl version
	I0127 03:02:23.154611 1121411 ssh_runner.go:195] Run: which crictl
	I0127 03:02:23.159040 1121411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 03:02:23.200478 1121411 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 03:02:23.200579 1121411 ssh_runner.go:195] Run: containerd --version
	I0127 03:02:23.228424 1121411 ssh_runner.go:195] Run: containerd --version
	I0127 03:02:23.265392 1121411 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 03:02:23.266856 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
	I0127 03:02:23.269741 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:23.270196 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:23.270231 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:23.270441 1121411 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 03:02:23.275461 1121411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:02:23.294081 1121411 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 03:02:21.866190 1119263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.080241643s)
	I0127 03:02:21.866293 1119263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:21.886667 1119263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:21.901554 1119263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:21.915270 1119263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:21.915296 1119263 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:21.915369 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:21.929169 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:21.929294 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:21.942913 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:21.956444 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:21.956522 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:21.970342 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:21.989145 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:21.989232 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:22.001913 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:22.013198 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:22.013270 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:22.026131 1119263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:22.226370 1119263 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:20.601947 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:22.605621 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:23.295574 1121411 kubeadm.go:883] updating cluster {Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 03:02:23.295756 1121411 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 03:02:23.295841 1121411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:02:23.331579 1121411 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 03:02:23.331604 1121411 containerd.go:534] Images already preloaded, skipping extraction
	I0127 03:02:23.331661 1121411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:02:23.368818 1121411 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 03:02:23.368848 1121411 cache_images.go:84] Images are preloaded, skipping loading
	I0127 03:02:23.368856 1121411 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.32.1 containerd true true} ...
	I0127 03:02:23.369012 1121411 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-642127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 03:02:23.369101 1121411 ssh_runner.go:195] Run: sudo crictl info
	I0127 03:02:23.405913 1121411 cni.go:84] Creating CNI manager for ""
	I0127 03:02:23.405949 1121411 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:23.405966 1121411 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 03:02:23.406015 1121411 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-642127 NodeName:newest-cni-642127 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 03:02:23.406210 1121411 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-642127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.51"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
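
The three YAML documents printed above (kubeadm InitConfiguration/ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are what minikube renders and then scp's to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal Go sketch of reading such a multi-document file and listing each document's kind; the use of gopkg.in/yaml.v3 here is an illustrative assumption, not necessarily the library minikube itself uses:

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3" // illustrative choice: any YAML library with multi-document decoding works
)

func main() {
	// A trimmed stand-in for the generated file; documents are separated by "---".
	const generated = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
`
	dec := yaml.NewDecoder(strings.NewReader(generated))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		// Decode returns io.EOF once every document has been consumed.
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("parse error:", err)
			return
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}

Running it against the real generated file would print the four kinds in order, which is a quick way to confirm the rendered config is at least structurally valid YAML.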
	I0127 03:02:23.406291 1121411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 03:02:23.418253 1121411 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:02:23.418339 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:02:23.431397 1121411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0127 03:02:23.452908 1121411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:02:23.474059 1121411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0127 03:02:23.494976 1121411 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0127 03:02:23.499246 1121411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
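
The /etc/hosts rewrite above pins control-plane.minikube.internal to the node IP: it drops any existing line for that name and appends a fresh mapping. A minimal Go sketch of the same idempotent update, written against a scratch file so it needs no root; the file name and the values are illustrative, taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts-format file so that exactly one line maps
// hostname to ip, mirroring the grep/echo/cp one-liner in the log above.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for hostname so repeated runs are idempotent.
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Scratch copy so the example does not touch the real /etc/hosts.
	_ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := ensureHostsEntry("hosts.test", "192.168.50.51", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}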
	I0127 03:02:23.512541 1121411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:23.648564 1121411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:23.667204 1121411 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127 for IP: 192.168.50.51
	I0127 03:02:23.667230 1121411 certs.go:194] generating shared ca certs ...
	I0127 03:02:23.667265 1121411 certs.go:226] acquiring lock for ca certs: {Name:mk567acc23cbe907605c03a2ec03c8e4859e8343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:23.667447 1121411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key
	I0127 03:02:23.667526 1121411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key
	I0127 03:02:23.667540 1121411 certs.go:256] generating profile certs ...
	I0127 03:02:23.667681 1121411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/client.key
	I0127 03:02:23.667777 1121411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/apiserver.key.fe27a200
	I0127 03:02:23.667863 1121411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/proxy-client.key
	I0127 03:02:23.668017 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem (1338 bytes)
	W0127 03:02:23.668071 1121411 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439_empty.pem, impossibly tiny 0 bytes
	I0127 03:02:23.668085 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 03:02:23.668115 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:02:23.668143 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:02:23.668177 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem (1675 bytes)
	I0127 03:02:23.668261 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem (1708 bytes)
	I0127 03:02:23.669195 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:02:23.715219 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 03:02:23.757555 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:02:23.797303 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 03:02:23.839764 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 03:02:23.889721 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 03:02:23.923393 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:02:23.953947 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 03:02:23.983760 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:02:24.016899 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem --> /usr/share/ca-certificates/1064439.pem (1338 bytes)
	I0127 03:02:24.060186 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /usr/share/ca-certificates/10644392.pem (1708 bytes)
	I0127 03:02:24.099215 1121411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:02:24.120841 1121411 ssh_runner.go:195] Run: openssl version
	I0127 03:02:24.127163 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:02:24.139725 1121411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:02:24.144911 1121411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:43 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:02:24.145000 1121411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:02:24.153545 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:02:24.167817 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1064439.pem && ln -fs /usr/share/ca-certificates/1064439.pem /etc/ssl/certs/1064439.pem"
	I0127 03:02:24.182019 1121411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1064439.pem
	I0127 03:02:24.188811 1121411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:51 /usr/share/ca-certificates/1064439.pem
	I0127 03:02:24.188883 1121411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1064439.pem
	I0127 03:02:24.196999 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1064439.pem /etc/ssl/certs/51391683.0"
	I0127 03:02:24.209518 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10644392.pem && ln -fs /usr/share/ca-certificates/10644392.pem /etc/ssl/certs/10644392.pem"
	I0127 03:02:24.221497 1121411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10644392.pem
	I0127 03:02:24.226538 1121411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:51 /usr/share/ca-certificates/10644392.pem
	I0127 03:02:24.226618 1121411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10644392.pem
	I0127 03:02:24.233572 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10644392.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 03:02:24.245296 1121411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 03:02:24.250242 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 03:02:24.256818 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 03:02:24.264939 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 03:02:24.272818 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 03:02:24.280734 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 03:02:24.289169 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
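
The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration on restart. A minimal Go sketch of the same check using only the standard library; the certificate path in main is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within d (the same question `openssl x509 -checkend` answers).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path; any PEM-encoded certificate works.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}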
	I0127 03:02:24.296827 1121411 kubeadm.go:392] StartCluster: {Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:02:24.297003 1121411 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 03:02:24.297095 1121411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:02:24.345692 1121411 cri.go:89] found id: "a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c"
	I0127 03:02:24.345721 1121411 cri.go:89] found id: "2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba"
	I0127 03:02:24.345726 1121411 cri.go:89] found id: "1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f"
	I0127 03:02:24.345731 1121411 cri.go:89] found id: "7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc"
	I0127 03:02:24.345736 1121411 cri.go:89] found id: "f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71"
	I0127 03:02:24.345741 1121411 cri.go:89] found id: "6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19"
	I0127 03:02:24.345745 1121411 cri.go:89] found id: "22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a"
	I0127 03:02:24.345749 1121411 cri.go:89] found id: "6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3"
	I0127 03:02:24.345753 1121411 cri.go:89] found id: ""
	I0127 03:02:24.345806 1121411 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 03:02:24.363134 1121411 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T03:02:24Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 03:02:24.363233 1121411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:02:24.377414 1121411 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 03:02:24.377441 1121411 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 03:02:24.377512 1121411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 03:02:24.391116 1121411 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 03:02:24.392658 1121411 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-642127" does not appear in /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:24.393662 1121411 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-1057178/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-642127" cluster setting kubeconfig missing "newest-cni-642127" context setting]
	I0127 03:02:24.395074 1121411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:24.406122 1121411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 03:02:24.417412 1121411 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0127 03:02:24.417457 1121411 kubeadm.go:1160] stopping kube-system containers ...
	I0127 03:02:24.417475 1121411 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 03:02:24.417545 1121411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:02:24.459011 1121411 cri.go:89] found id: "a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c"
	I0127 03:02:24.459043 1121411 cri.go:89] found id: "2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba"
	I0127 03:02:24.459049 1121411 cri.go:89] found id: "1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f"
	I0127 03:02:24.459055 1121411 cri.go:89] found id: "7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc"
	I0127 03:02:24.459059 1121411 cri.go:89] found id: "f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71"
	I0127 03:02:24.459065 1121411 cri.go:89] found id: "6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19"
	I0127 03:02:24.459069 1121411 cri.go:89] found id: "22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a"
	I0127 03:02:24.459074 1121411 cri.go:89] found id: "6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3"
	I0127 03:02:24.459079 1121411 cri.go:89] found id: ""
	I0127 03:02:24.459085 1121411 cri.go:252] Stopping containers: [a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c 2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba 1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f 7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71 6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19 22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a 6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3]
	I0127 03:02:24.459142 1121411 ssh_runner.go:195] Run: which crictl
	I0127 03:02:24.463700 1121411 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c 2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba 1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f 7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71 6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19 22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a 6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3
	I0127 03:02:24.514136 1121411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 03:02:24.533173 1121411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:24.546127 1121411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:24.546153 1121411 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:24.546208 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:24.557350 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:24.557425 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:24.568241 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:24.579187 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:24.579283 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:24.590554 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:24.603551 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:24.603617 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:24.617395 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:24.630452 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:24.630532 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:24.642268 1121411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:24.652281 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:24.829811 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:23.282142 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:25.286311 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.286348 1119007 pod_ready.go:82] duration metric: took 9.012019717s for pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.286363 1119007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.296155 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.296266 1119007 pod_ready.go:82] duration metric: took 9.891475ms for pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.296304 1119007 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.306424 1119007 pod_ready.go:93] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.306520 1119007 pod_ready.go:82] duration metric: took 10.178061ms for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.306550 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.316320 1119007 pod_ready.go:93] pod "kube-apiserver-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.316353 1119007 pod_ready.go:82] duration metric: took 9.779811ms for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.316368 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.324972 1119007 pod_ready.go:93] pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.324998 1119007 pod_ready.go:82] duration metric: took 8.620263ms for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.325011 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-45pz6" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.682761 1119007 pod_ready.go:93] pod "kube-proxy-45pz6" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.682792 1119007 pod_ready.go:82] duration metric: took 357.773408ms for pod "kube-proxy-45pz6" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.682807 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:26.086323 1119007 pod_ready.go:93] pod "kube-scheduler-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:26.086365 1119007 pod_ready.go:82] duration metric: took 403.548355ms for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:26.086378 1119007 pod_ready.go:39] duration metric: took 9.839373235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
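
The pod_ready lines above poll each system-critical pod until its Ready condition reports True. A minimal sketch of that condition check against the k8s.io/api types; this illustrates the idea and is not minikube's actual helper:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the PodReady condition is True, which is what the
// log's `has status "Ready":"True"` lines correspond to.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}
	fmt.Println(isPodReady(pod)) // true
}

In the run above this loop is why metrics-server keeps logging "Ready":"False": its container never becomes ready, so the wait keeps cycling until the outer test timeout kills the start.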
	I0127 03:02:26.086398 1119007 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:26.086493 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:26.115441 1119007 api_server.go:72] duration metric: took 10.186729821s to wait for apiserver process to appear ...
	I0127 03:02:26.115474 1119007 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:26.115503 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0127 03:02:26.125822 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0127 03:02:26.127247 1119007 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:26.127277 1119007 api_server.go:131] duration metric: took 11.792506ms to wait for apiserver health ...
	I0127 03:02:26.127289 1119007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:26.285021 1119007 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:26.285059 1119007 system_pods.go:61] "coredns-668d6bf9bc-86j6q" [9b85ae79-ae19-4cd1-a0da-0343c9e2801c] Running
	I0127 03:02:26.285067 1119007 system_pods.go:61] "coredns-668d6bf9bc-fk8cw" [c7075b92-233d-4a5a-b864-ef349d7125e7] Running
	I0127 03:02:26.285073 1119007 system_pods.go:61] "etcd-no-preload-887091" [45d4a5fc-797f-4d4a-9204-049ebcdc5647] Running
	I0127 03:02:26.285079 1119007 system_pods.go:61] "kube-apiserver-no-preload-887091" [11e7ea14-678a-408f-a722-8fedb984c086] Running
	I0127 03:02:26.285085 1119007 system_pods.go:61] "kube-controller-manager-no-preload-887091" [95d63381-33aa-428b-80b1-6e8ccf96b8a1] Running
	I0127 03:02:26.285089 1119007 system_pods.go:61] "kube-proxy-45pz6" [b3aa986f-d6d8-4050-8760-438aabd39bdc] Running
	I0127 03:02:26.285094 1119007 system_pods.go:61] "kube-scheduler-no-preload-887091" [5065d24f-256d-43ad-bd00-1d5868b7214d] Running
	I0127 03:02:26.285104 1119007 system_pods.go:61] "metrics-server-f79f97bbb-vshg4" [33ae36ed-d8a4-4d60-bcd0-1becf2d490bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:26.285110 1119007 system_pods.go:61] "storage-provisioner" [127a1f13-b70c-4482-bd8b-14a6bf24b663] Running
	I0127 03:02:26.285121 1119007 system_pods.go:74] duration metric: took 157.824017ms to wait for pod list to return data ...
	I0127 03:02:26.285134 1119007 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:02:26.480092 1119007 default_sa.go:45] found service account: "default"
	I0127 03:02:26.480128 1119007 default_sa.go:55] duration metric: took 194.984911ms for default service account to be created ...
	I0127 03:02:26.480141 1119007 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:02:26.688727 1119007 system_pods.go:87] 9 kube-system pods found
	I0127 03:02:25.099839 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:27.100451 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:29.599652 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:26.158504 1121411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.328648156s)
	I0127 03:02:26.158550 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:26.404894 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:26.526530 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:26.667432 1121411 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:26.667635 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:27.167965 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:27.667769 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:27.702851 1121411 api_server.go:72] duration metric: took 1.03541528s to wait for apiserver process to appear ...
	I0127 03:02:27.702957 1121411 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:27.702996 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:27.703762 1121411 api_server.go:269] stopped: https://192.168.50.51:8443/healthz: Get "https://192.168.50.51:8443/healthz": dial tcp 192.168.50.51:8443: connect: connection refused
	I0127 03:02:28.203377 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:28.204135 1121411 api_server.go:269] stopped: https://192.168.50.51:8443/healthz: Get "https://192.168.50.51:8443/healthz": dial tcp 192.168.50.51:8443: connect: connection refused
	I0127 03:02:28.703884 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
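
The api_server.go lines above re-check https://192.168.50.51:8443/healthz roughly every 500ms; "connection refused" just means the restarted apiserver has not bound the port yet, and the later 403/500 bodies show it is up but still finishing its bootstrap post-start hooks. A rough sketch of such a poll loop; the endpoint, timeout, and the skipped TLS verification are illustrative assumptions, not minikube's exact client setup:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert during bootstrap, so this
		// sketch skips verification; a real client would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
		}
		// Any error or non-200 status (refused, 403, 500) means "not ready yet".
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ready after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.51:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}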
	I0127 03:02:32.408333 1119263 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:32.408420 1119263 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:32.408564 1119263 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:32.408723 1119263 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:32.408850 1119263 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:32.408936 1119263 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:32.410600 1119263 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:32.410694 1119263 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:32.410784 1119263 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:32.410899 1119263 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:32.410985 1119263 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:32.411061 1119263 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:32.411144 1119263 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:32.411243 1119263 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:32.411349 1119263 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:32.411474 1119263 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:32.411592 1119263 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:32.411654 1119263 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:32.411755 1119263 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:32.411823 1119263 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:32.411900 1119263 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:32.411957 1119263 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:32.412019 1119263 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:32.412077 1119263 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:32.412166 1119263 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:32.412460 1119263 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:32.415088 1119263 out.go:235]   - Booting up control plane ...
	I0127 03:02:32.415215 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:32.415349 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:32.415444 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:32.415597 1119263 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:32.415722 1119263 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:32.415772 1119263 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:32.415934 1119263 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:32.416041 1119263 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:32.416113 1119263 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001709036s
	I0127 03:02:32.416228 1119263 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:32.416326 1119263 kubeadm.go:310] [api-check] The API server is healthy after 6.003070171s
	I0127 03:02:32.416466 1119263 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:32.416619 1119263 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:32.416691 1119263 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:32.416890 1119263 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-264552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:32.416990 1119263 kubeadm.go:310] [bootstrap-token] Using token: glfh41.djplehex31d2nmyn
	I0127 03:02:32.418322 1119263 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:32.418468 1119263 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:32.418553 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:32.418749 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:32.418932 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:32.419089 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:32.419214 1119263 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:32.419378 1119263 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:32.419436 1119263 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:32.419498 1119263 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:32.419505 1119263 kubeadm.go:310] 
	I0127 03:02:32.419581 1119263 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:32.419587 1119263 kubeadm.go:310] 
	I0127 03:02:32.419686 1119263 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:32.419696 1119263 kubeadm.go:310] 
	I0127 03:02:32.419729 1119263 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:32.419809 1119263 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:32.419880 1119263 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:32.419891 1119263 kubeadm.go:310] 
	I0127 03:02:32.419987 1119263 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:32.419998 1119263 kubeadm.go:310] 
	I0127 03:02:32.420067 1119263 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:32.420078 1119263 kubeadm.go:310] 
	I0127 03:02:32.420143 1119263 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:32.420236 1119263 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:32.420319 1119263 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:32.420330 1119263 kubeadm.go:310] 
	I0127 03:02:32.420421 1119263 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:32.420508 1119263 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:32.420519 1119263 kubeadm.go:310] 
	I0127 03:02:32.420616 1119263 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token glfh41.djplehex31d2nmyn \
	I0127 03:02:32.420750 1119263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
	I0127 03:02:32.420781 1119263 kubeadm.go:310] 	--control-plane 
	I0127 03:02:32.420790 1119263 kubeadm.go:310] 
	I0127 03:02:32.420891 1119263 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:32.420902 1119263 kubeadm.go:310] 
	I0127 03:02:32.421036 1119263 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token glfh41.djplehex31d2nmyn \
	I0127 03:02:32.421192 1119263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba 
	I0127 03:02:32.421210 1119263 cni.go:84] Creating CNI manager for ""
	I0127 03:02:32.421220 1119263 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:32.422542 1119263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:30.820769 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 03:02:30.820809 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 03:02:30.820827 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:30.840404 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 03:02:30.840436 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 03:02:31.203948 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:31.209795 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:02:31.209820 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:02:31.703217 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:31.724822 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:02:31.724862 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:02:32.203446 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:32.210068 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:02:32.210100 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:02:32.703717 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:32.709016 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 200:
	ok
	I0127 03:02:32.719003 1121411 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:32.719041 1121411 api_server.go:131] duration metric: took 5.016063652s to wait for apiserver health ...
	I0127 03:02:32.719055 1121411 cni.go:84] Creating CNI manager for ""
	I0127 03:02:32.719065 1121411 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:32.721101 1121411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:32.722433 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:32.734857 1121411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:32.761120 1121411 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:32.778500 1121411 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:32.778547 1121411 system_pods.go:61] "coredns-668d6bf9bc-dscrm" [2869a26b-4522-43cd-8417-abc17b77dc7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:02:32.778558 1121411 system_pods.go:61] "coredns-668d6bf9bc-rcdv8" [7697dd25-c99a-4413-a242-54cca1d1e5e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:02:32.778571 1121411 system_pods.go:61] "etcd-newest-cni-642127" [816ba553-68cb-4496-8dba-7839e9799916] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 03:02:32.778583 1121411 system_pods.go:61] "kube-apiserver-newest-cni-642127" [69c55a7c-148b-40ff-86ec-739c5f668a11] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 03:02:32.778596 1121411 system_pods.go:61] "kube-controller-manager-newest-cni-642127" [1b6d1085-c4fe-43f2-a9ab-320adeb6cd38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 03:02:32.778608 1121411 system_pods.go:61] "kube-proxy-5q7mp" [1efd4424-3475-45e1-b80b-c941de90e34d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 03:02:32.778620 1121411 system_pods.go:61] "kube-scheduler-newest-cni-642127" [f1c81d74-2818-4093-9e69-19359cc3ff50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 03:02:32.778631 1121411 system_pods.go:61] "metrics-server-f79f97bbb-47hqq" [7f6ccb13-e73f-4514-a639-e1297b545cf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:32.778642 1121411 system_pods.go:61] "storage-provisioner" [ee06d1e8-0ae7-42c7-9c5b-d19fcfb83f40] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 03:02:32.778653 1121411 system_pods.go:74] duration metric: took 17.501517ms to wait for pod list to return data ...
	I0127 03:02:32.778667 1121411 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:02:32.783164 1121411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:02:32.783201 1121411 node_conditions.go:123] node cpu capacity is 2
	I0127 03:02:32.783216 1121411 node_conditions.go:105] duration metric: took 4.539816ms to run NodePressure ...
	I0127 03:02:32.783239 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:33.135340 1121411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:33.148690 1121411 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:33.148723 1121411 kubeadm.go:597] duration metric: took 8.771274475s to restartPrimaryControlPlane
	I0127 03:02:33.148739 1121411 kubeadm.go:394] duration metric: took 8.851928105s to StartCluster
	I0127 03:02:33.148766 1121411 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:33.148862 1121411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:33.150733 1121411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:33.150984 1121411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:33.151079 1121411 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:33.151202 1121411 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-642127"
	I0127 03:02:33.151222 1121411 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-642127"
	W0127 03:02:33.151238 1121411 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:33.151257 1121411 addons.go:69] Setting metrics-server=true in profile "newest-cni-642127"
	I0127 03:02:33.151258 1121411 addons.go:69] Setting default-storageclass=true in profile "newest-cni-642127"
	I0127 03:02:33.151284 1121411 addons.go:238] Setting addon metrics-server=true in "newest-cni-642127"
	I0127 03:02:33.151272 1121411 addons.go:69] Setting dashboard=true in profile "newest-cni-642127"
	W0127 03:02:33.151294 1121411 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:33.151294 1121411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-642127"
	I0127 03:02:33.151315 1121411 addons.go:238] Setting addon dashboard=true in "newest-cni-642127"
	I0127 03:02:33.151313 1121411 config.go:182] Loaded profile config "newest-cni-642127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	W0127 03:02:33.151325 1121411 addons.go:247] addon dashboard should already be in state true
	I0127 03:02:33.151330 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
	I0127 03:02:33.151355 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
	I0127 03:02:33.151285 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
	I0127 03:02:33.151717 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.151747 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.151754 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.151760 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.151789 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.151793 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.151825 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.151865 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.152612 1121411 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:33.154050 1121411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:33.169429 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
	I0127 03:02:33.169982 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.170451 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.170472 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.170815 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.171371 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36587
	I0127 03:02:33.171487 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.171528 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.171747 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0127 03:02:33.171942 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.172289 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.172471 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.172498 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.172746 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.172766 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.172908 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.174172 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.174237 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.175157 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0127 03:02:33.175572 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.175616 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.175822 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.177792 1121411 addons.go:238] Setting addon default-storageclass=true in "newest-cni-642127"
	W0127 03:02:33.177817 1121411 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:33.177848 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
	I0127 03:02:33.178206 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.178256 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.178862 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.178892 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.179421 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.192581 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I0127 03:02:33.193097 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.193643 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.193668 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.194026 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.194248 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.197497 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:33.199029 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
	I0127 03:02:33.199688 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.199789 1121411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:33.200189 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.200217 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.200630 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.200826 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.201177 1121411 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:33.201196 1121411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:33.201215 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:33.201773 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.201821 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.203099 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:33.204646 1121411 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:33.205709 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.206717 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:33.206782 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.207074 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:33.207272 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:33.207453 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:33.207613 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:33.208044 1121411 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:33.209101 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:33.209120 1121411 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:33.209140 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:33.212709 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.213133 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:33.213153 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.213451 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:33.213632 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:33.213734 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:33.213819 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:33.219861 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43957
	I0127 03:02:33.220403 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.220991 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.221024 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.221408 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.222196 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.222254 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.223731 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0127 03:02:33.224051 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.224552 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.224573 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.224816 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.225077 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.227906 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:33.229635 1121411 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:32.423722 1119263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:32.436568 1119263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:32.461950 1119263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:32.462072 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:32.462109 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-264552 minikube.k8s.io/updated_at=2025_01_27T03_02_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=embed-certs-264552 minikube.k8s.io/primary=true
	I0127 03:02:32.477721 1119263 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:32.739220 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:33.239786 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:33.740039 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:34.239291 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:34.740312 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:31.600099 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:33.600177 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:33.231071 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:33.231090 1121411 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:33.231112 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:33.233979 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.234359 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:33.234412 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.234633 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:33.234777 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:33.234927 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:33.235147 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:33.243914 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40181
	I0127 03:02:33.244332 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.244875 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.244889 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.245272 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.245443 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.247204 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:33.247418 1121411 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:33.247429 1121411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:33.247455 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:33.250553 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.251030 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:33.251045 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.251208 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:33.251359 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:33.251505 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:33.251611 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:33.375505 1121411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:33.394405 1121411 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:33.394507 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:33.410947 1121411 api_server.go:72] duration metric: took 259.928237ms to wait for apiserver process to appear ...
	I0127 03:02:33.410983 1121411 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:33.411005 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:33.416758 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 200:
	ok
	I0127 03:02:33.418367 1121411 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:33.418395 1121411 api_server.go:131] duration metric: took 7.402525ms to wait for apiserver health ...
	I0127 03:02:33.418407 1121411 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:33.424893 1121411 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:33.424921 1121411 system_pods.go:61] "coredns-668d6bf9bc-dscrm" [2869a26b-4522-43cd-8417-abc17b77dc7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:02:33.424928 1121411 system_pods.go:61] "coredns-668d6bf9bc-rcdv8" [7697dd25-c99a-4413-a242-54cca1d1e5e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:02:33.424936 1121411 system_pods.go:61] "etcd-newest-cni-642127" [816ba553-68cb-4496-8dba-7839e9799916] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 03:02:33.424965 1121411 system_pods.go:61] "kube-apiserver-newest-cni-642127" [69c55a7c-148b-40ff-86ec-739c5f668a11] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 03:02:33.424984 1121411 system_pods.go:61] "kube-controller-manager-newest-cni-642127" [1b6d1085-c4fe-43f2-a9ab-320adeb6cd38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 03:02:33.424992 1121411 system_pods.go:61] "kube-proxy-5q7mp" [1efd4424-3475-45e1-b80b-c941de90e34d] Running
	I0127 03:02:33.424997 1121411 system_pods.go:61] "kube-scheduler-newest-cni-642127" [f1c81d74-2818-4093-9e69-19359cc3ff50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 03:02:33.425005 1121411 system_pods.go:61] "metrics-server-f79f97bbb-47hqq" [7f6ccb13-e73f-4514-a639-e1297b545cf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:33.425009 1121411 system_pods.go:61] "storage-provisioner" [ee06d1e8-0ae7-42c7-9c5b-d19fcfb83f40] Running
	I0127 03:02:33.425017 1121411 system_pods.go:74] duration metric: took 6.604015ms to wait for pod list to return data ...
	I0127 03:02:33.425027 1121411 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:02:33.427992 1121411 default_sa.go:45] found service account: "default"
	I0127 03:02:33.428016 1121411 default_sa.go:55] duration metric: took 2.981475ms for default service account to be created ...
	I0127 03:02:33.428030 1121411 kubeadm.go:582] duration metric: took 277.019922ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 03:02:33.428053 1121411 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:02:33.431283 1121411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:02:33.431303 1121411 node_conditions.go:123] node cpu capacity is 2
	I0127 03:02:33.431313 1121411 node_conditions.go:105] duration metric: took 3.254985ms to run NodePressure ...
	I0127 03:02:33.431324 1121411 start.go:241] waiting for startup goroutines ...
	I0127 03:02:33.462238 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:33.462261 1121411 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:33.476129 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:33.476162 1121411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:33.488754 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:33.488789 1121411 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:33.497073 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:33.519522 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:33.519557 1121411 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:33.551868 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:33.551905 1121411 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:33.565343 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:33.565374 1121411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:33.600695 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:33.600720 1121411 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:33.602150 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:33.632660 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:33.632694 1121411 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:33.652690 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:33.705754 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:33.705786 1121411 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:33.793208 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:33.793261 1121411 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:33.881849 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:33.881884 1121411 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:33.979510 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:33.979542 1121411 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:34.040605 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:34.040637 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:34.041032 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:34.041080 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:34.041090 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:34.041113 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:34.041137 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:34.041431 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:34.041481 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:34.041493 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:34.058399 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:34.104645 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:34.104666 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:34.104999 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:34.105025 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:34.105046 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:35.194812 1121411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.542086223s)
	I0127 03:02:35.194884 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.194899 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.194665 1121411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.592471736s)
	I0127 03:02:35.194995 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.195010 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.197298 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.197320 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.197331 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.197338 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.197484 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:35.197524 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.197543 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.197551 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.197563 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.197565 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.197575 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.197591 1121411 addons.go:479] Verifying addon metrics-server=true in "newest-cni-642127"
	I0127 03:02:35.197806 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.197821 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.738350 1121411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.679893698s)
	I0127 03:02:35.738414 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.738431 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.738859 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.738880 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.738897 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.738906 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.739194 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.739211 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.739256 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:35.740543 1121411 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-642127 addons enable metrics-server
	
	I0127 03:02:35.742112 1121411 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0127 03:02:35.743312 1121411 addons.go:514] duration metric: took 2.592255359s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0127 03:02:35.743356 1121411 start.go:246] waiting for cluster config update ...
	I0127 03:02:35.743372 1121411 start.go:255] writing updated cluster config ...
	I0127 03:02:35.743643 1121411 ssh_runner.go:195] Run: rm -f paused
	I0127 03:02:35.802583 1121411 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 03:02:35.804271 1121411 out.go:177] * Done! kubectl is now configured to use "newest-cni-642127" cluster and "default" namespace by default
	I0127 03:02:35.240046 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:35.739577 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:36.239666 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:36.396540 1119263 kubeadm.go:1113] duration metric: took 3.934543669s to wait for elevateKubeSystemPrivileges
	I0127 03:02:36.396587 1119263 kubeadm.go:394] duration metric: took 4m36.765414047s to StartCluster
	I0127 03:02:36.396612 1119263 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:36.396700 1119263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:36.399283 1119263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:36.399607 1119263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:36.399896 1119263 config.go:182] Loaded profile config "embed-certs-264552": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:36.399967 1119263 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:36.400065 1119263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-264552"
	I0127 03:02:36.400097 1119263 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-264552"
	W0127 03:02:36.400111 1119263 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:36.400147 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.400364 1119263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-264552"
	I0127 03:02:36.400393 1119263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-264552"
	I0127 03:02:36.400697 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.400746 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.400860 1119263 addons.go:69] Setting dashboard=true in profile "embed-certs-264552"
	I0127 03:02:36.400889 1119263 addons.go:238] Setting addon dashboard=true in "embed-certs-264552"
	I0127 03:02:36.400891 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	W0127 03:02:36.400899 1119263 addons.go:247] addon dashboard should already be in state true
	I0127 03:02:36.400934 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.400962 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.401007 1119263 addons.go:69] Setting metrics-server=true in profile "embed-certs-264552"
	I0127 03:02:36.401034 1119263 addons.go:238] Setting addon metrics-server=true in "embed-certs-264552"
	W0127 03:02:36.401044 1119263 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:36.401078 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.401508 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.401557 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.401777 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.401824 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.401991 1119263 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:36.403910 1119263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:36.422683 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0127 03:02:36.423177 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.423824 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.423851 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.424298 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.424516 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.425635 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0127 03:02:36.425994 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0127 03:02:36.426142 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.426423 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.426703 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.426729 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.427088 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.427111 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.427288 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.427869 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.427910 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.429980 1119263 addons.go:238] Setting addon default-storageclass=true in "embed-certs-264552"
	W0127 03:02:36.429999 1119263 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:36.430029 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.430409 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.430443 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.430902 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.431582 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.431620 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.449634 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0127 03:02:36.450301 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.451062 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.451085 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.451525 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.452191 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.452239 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.455086 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40299
	I0127 03:02:36.455301 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41491
	I0127 03:02:36.455535 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.456246 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.456264 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.456672 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.456898 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.458545 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0127 03:02:36.459300 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.459602 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.460164 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.460195 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.461041 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.461379 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.461672 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.461676 1119263 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:36.461723 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.461915 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.461930 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.462520 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.462923 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.465082 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.465338 1119263 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:36.466448 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:36.466474 1119263 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:36.466495 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.466570 1119263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:36.468155 1119263 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:36.468187 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:36.468209 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.470910 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.471779 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.471818 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.472039 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.472253 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.472399 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.472572 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.475423 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34703
	I0127 03:02:36.476153 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.476804 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.476823 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.477245 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.477505 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.479472 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.481333 1119263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:36.481739 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0127 03:02:36.482275 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.482837 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.482854 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.482868 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:36.482887 1119263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:36.482910 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.483231 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.483493 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.486181 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.486454 1119263 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:36.486475 1119263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:36.486493 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.488039 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.488500 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.488532 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.488756 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.488966 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.489130 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.489289 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.489612 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.489866 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.489889 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.490026 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.490149 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.490261 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.490344 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.494271 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.494636 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.494659 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.495050 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.495292 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.495511 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.495682 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.737773 1119263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:36.826450 1119263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-264552" to be "Ready" ...
	I0127 03:02:36.857580 1119263 node_ready.go:49] node "embed-certs-264552" has status "Ready":"True"
	I0127 03:02:36.857609 1119263 node_ready.go:38] duration metric: took 31.04815ms for node "embed-certs-264552" to be "Ready" ...
	I0127 03:02:36.857623 1119263 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:36.873458 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:36.877540 1119263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:36.957829 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:36.957866 1119263 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:37.005603 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:37.005635 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:37.006377 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:37.031565 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:37.031587 1119263 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:37.100245 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:37.100282 1119263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:37.175281 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:37.175309 1119263 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:37.221791 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:37.221825 1119263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:37.308268 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:37.423632 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:37.423660 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:37.588554 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:37.588586 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:37.589111 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
	I0127 03:02:37.589130 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:37.589147 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:37.589162 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:37.589176 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:37.589462 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:37.589483 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:37.634711 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:37.634744 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:37.635023 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
	I0127 03:02:37.635065 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:37.635073 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:37.649206 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:37.649231 1119263 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:37.780671 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:37.780709 1119263 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:37.963118 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:37.963151 1119263 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:38.051717 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:38.051755 1119263 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:38.102698 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:38.102726 1119263 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:38.177754 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:38.867496 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.861076308s)
	I0127 03:02:38.867579 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.867594 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.868010 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.868037 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.868054 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.868067 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.868377 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.868397 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.923746 1119263 pod_ready.go:103] pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:38.991645 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.683326945s)
	I0127 03:02:38.991708 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.991728 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.992116 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.992137 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.992146 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.992153 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.992566 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
	I0127 03:02:38.992598 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.992624 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.992643 1119263 addons.go:479] Verifying addon metrics-server=true in "embed-certs-264552"
	I0127 03:02:39.990731 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.812917797s)
	I0127 03:02:39.990802 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:39.990818 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:39.991192 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:39.991223 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:39.991235 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:39.991246 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:39.991554 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:39.991575 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:39.993095 1119263 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-264552 addons enable metrics-server
	
	I0127 03:02:39.994564 1119263 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 03:02:35.602346 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:38.100810 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:39.995898 1119263 addons.go:514] duration metric: took 3.595931069s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
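	For reference, the addon enablement logged above can be spot-checked from the same workspace; this is only a sketch, assuming the profile name, binary paths, and kubeconfig locations shown earlier in this log, and is not taken from the run itself:
	
	# Sketch: list addon state for this profile and check the dashboard pods.
	# Paths/profile name are taken from the log above; the commands were not run here.
	out/minikube-linux-amd64 -p embed-certs-264552 addons list
	sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods -n kubernetes-dashboard
	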
	I0127 03:02:40.888544 1119263 pod_ready.go:93] pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.888568 1119263 pod_ready.go:82] duration metric: took 4.01099998s for pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.888579 1119263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.895910 1119263 pod_ready.go:93] pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.895941 1119263 pod_ready.go:82] duration metric: took 7.354168ms for pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.895955 1119263 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.900393 1119263 pod_ready.go:93] pod "etcd-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.900415 1119263 pod_ready.go:82] duration metric: took 4.45357ms for pod "etcd-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.900426 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.908664 1119263 pod_ready.go:93] pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.908686 1119263 pod_ready.go:82] duration metric: took 8.251039ms for pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.908697 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:42.917072 1119263 pod_ready.go:103] pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:44.927051 1119263 pod_ready.go:93] pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:44.927083 1119263 pod_ready.go:82] duration metric: took 4.01837775s for pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.927096 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kwqqr" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.939727 1119263 pod_ready.go:93] pod "kube-proxy-kwqqr" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:44.939759 1119263 pod_ready.go:82] duration metric: took 12.654042ms for pod "kube-proxy-kwqqr" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.939772 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.966136 1119263 pod_ready.go:93] pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:44.966165 1119263 pod_ready.go:82] duration metric: took 26.38251ms for pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.966178 1119263 pod_ready.go:39] duration metric: took 8.108541494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:44.966199 1119263 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:44.966260 1119263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:40.598596 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:41.593185 1119269 pod_ready.go:82] duration metric: took 4m0.0010842s for pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:41.593221 1119269 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 03:02:41.593251 1119269 pod_ready.go:39] duration metric: took 4m13.044846596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:41.593292 1119269 kubeadm.go:597] duration metric: took 4m21.461431723s to restartPrimaryControlPlane
	W0127 03:02:41.593372 1119269 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:41.593408 1119269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:02:43.620030 1119269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.026590178s)
	I0127 03:02:43.620115 1119269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:43.639142 1119269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:43.651292 1119269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:43.661667 1119269 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:43.661687 1119269 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:43.661733 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 03:02:43.672110 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:43.672165 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:43.683718 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 03:02:43.693914 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:43.693983 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:43.704250 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 03:02:43.714202 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:43.714283 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:43.724775 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 03:02:43.734789 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:43.734857 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
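	The grep/rm sequence above amounts to one stale-config cleanup pass; a minimal sketch of the same check follows, assuming the four kubeconfig files and the port-8444 endpoint named in the log (this loop is illustrative, not minikube's actual code):
	
	# Sketch: drop any kubeconfig that does not reference the expected apiserver endpoint.
	for f in admin kubelet controller-manager scheduler; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f.conf" 2>/dev/null; then
	    sudo rm -f "/etc/kubernetes/$f.conf"
	  fi
	done
	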
	I0127 03:02:43.746079 1119269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:43.925921 1119269 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:44.991380 1119263 api_server.go:72] duration metric: took 8.59171979s to wait for apiserver process to appear ...
	I0127 03:02:44.991410 1119263 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:44.991439 1119263 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0127 03:02:44.997033 1119263 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I0127 03:02:44.998283 1119263 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:44.998310 1119263 api_server.go:131] duration metric: took 6.891198ms to wait for apiserver health ...
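	The healthz wait above can be reproduced by hand against the same endpoint; a sketch, with -k used because the apiserver presents the cluster's self-signed CA:
	
	# Sketch: expect the literal body "ok" once the apiserver reports healthy.
	curl -sk https://192.168.39.145:8443/healthz
	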
	I0127 03:02:44.998321 1119263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:45.087014 1119263 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:45.087059 1119263 system_pods.go:61] "coredns-668d6bf9bc-mbkl2" [29059a1e-4228-4fbc-bf18-0de800cbb47a] Running
	I0127 03:02:45.087067 1119263 system_pods.go:61] "coredns-668d6bf9bc-n5wn4" [416b5ae4-f786-4b1e-a699-d688b967a6f4] Running
	I0127 03:02:45.087073 1119263 system_pods.go:61] "etcd-embed-certs-264552" [b2389caf-28fb-42d8-9912-8c3829f8bfd6] Running
	I0127 03:02:45.087079 1119263 system_pods.go:61] "kube-apiserver-embed-certs-264552" [0150043f-38b8-4946-84f1-0c9c7aaf7328] Running
	I0127 03:02:45.087084 1119263 system_pods.go:61] "kube-controller-manager-embed-certs-264552" [940554f4-564d-4939-a09a-0ea61e36ff6c] Running
	I0127 03:02:45.087090 1119263 system_pods.go:61] "kube-proxy-kwqqr" [85b35a19-646d-43a8-b90f-c5a5b4a93393] Running
	I0127 03:02:45.087096 1119263 system_pods.go:61] "kube-scheduler-embed-certs-264552" [4a578d9d-f487-4839-a23d-1ec267612f0d] Running
	I0127 03:02:45.087106 1119263 system_pods.go:61] "metrics-server-f79f97bbb-6dg5x" [4b9cd5d7-1055-45ea-8ac9-1a91b9246c0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:45.087114 1119263 system_pods.go:61] "storage-provisioner" [4e4e1f9a-505b-4ed2-ad81-5543176f645a] Running
	I0127 03:02:45.087123 1119263 system_pods.go:74] duration metric: took 88.795129ms to wait for pod list to return data ...
	I0127 03:02:45.087134 1119263 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:02:45.282547 1119263 default_sa.go:45] found service account: "default"
	I0127 03:02:45.282578 1119263 default_sa.go:55] duration metric: took 195.436382ms for default service account to be created ...
	I0127 03:02:45.282589 1119263 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:02:45.486513 1119263 system_pods.go:87] 9 kube-system pods found
	I0127 03:02:52.671028 1119269 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:52.671099 1119269 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:52.671206 1119269 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:52.671380 1119269 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:52.671539 1119269 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:52.671639 1119269 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:52.673297 1119269 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:52.673383 1119269 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:52.673474 1119269 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:52.673554 1119269 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:52.673609 1119269 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:52.673670 1119269 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:52.673716 1119269 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:52.673767 1119269 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:52.673816 1119269 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:52.673876 1119269 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:52.673954 1119269 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:52.673999 1119269 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:52.674047 1119269 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:52.674108 1119269 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:52.674187 1119269 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:52.674263 1119269 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:52.674321 1119269 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:52.674367 1119269 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:52.674447 1119269 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:52.674507 1119269 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:52.675997 1119269 out.go:235]   - Booting up control plane ...
	I0127 03:02:52.676130 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:52.676280 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:52.676377 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:52.676517 1119269 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:52.676652 1119269 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:52.676719 1119269 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:52.676922 1119269 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:52.677082 1119269 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:52.677173 1119269 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001864315s
	I0127 03:02:52.677287 1119269 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:52.677368 1119269 kubeadm.go:310] [api-check] The API server is healthy after 5.001344194s
	I0127 03:02:52.677511 1119269 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:52.677653 1119269 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:52.677715 1119269 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:52.677867 1119269 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-717075 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:52.677952 1119269 kubeadm.go:310] [bootstrap-token] Using token: dptef9.zgjhm0hnxmak7ndp
	I0127 03:02:52.679531 1119269 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:52.679681 1119269 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:52.679793 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:52.680000 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:52.680151 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:52.680307 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:52.680415 1119269 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:52.680548 1119269 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:52.680611 1119269 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:52.680680 1119269 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:52.680690 1119269 kubeadm.go:310] 
	I0127 03:02:52.680769 1119269 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:52.680779 1119269 kubeadm.go:310] 
	I0127 03:02:52.680875 1119269 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:52.680886 1119269 kubeadm.go:310] 
	I0127 03:02:52.680922 1119269 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:52.681024 1119269 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:52.681096 1119269 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:52.681106 1119269 kubeadm.go:310] 
	I0127 03:02:52.681192 1119269 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:52.681208 1119269 kubeadm.go:310] 
	I0127 03:02:52.681275 1119269 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:52.681289 1119269 kubeadm.go:310] 
	I0127 03:02:52.681363 1119269 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:52.681491 1119269 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:52.681562 1119269 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:52.681568 1119269 kubeadm.go:310] 
	I0127 03:02:52.681636 1119269 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:52.681749 1119269 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:52.681759 1119269 kubeadm.go:310] 
	I0127 03:02:52.681896 1119269 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token dptef9.zgjhm0hnxmak7ndp \
	I0127 03:02:52.682053 1119269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
	I0127 03:02:52.682085 1119269 kubeadm.go:310] 	--control-plane 
	I0127 03:02:52.682091 1119269 kubeadm.go:310] 
	I0127 03:02:52.682242 1119269 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:52.682259 1119269 kubeadm.go:310] 
	I0127 03:02:52.682381 1119269 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token dptef9.zgjhm0hnxmak7ndp \
	I0127 03:02:52.682532 1119269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba 
	I0127 03:02:52.682561 1119269 cni.go:84] Creating CNI manager for ""
	I0127 03:02:52.682574 1119269 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:52.684226 1119269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:52.685352 1119269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:52.697398 1119269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
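	The 496-byte file written above is the bridge CNI config referenced by the "Configuring bridge CNI" step. A typical bridge + portmap conflist looks roughly like the sketch below; the cniVersion, subnet, and field set are assumptions for illustration, not the exact content minikube generated in this run:
	
	# Sketch: write an illustrative bridge CNI conflist (values are assumed, not from this run).
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	EOF
	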
	I0127 03:02:52.719046 1119269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:52.719104 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:52.719145 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-717075 minikube.k8s.io/updated_at=2025_01_27T03_02_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=default-k8s-diff-port-717075 minikube.k8s.io/primary=true
	I0127 03:02:52.761799 1119269 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:52.952929 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:53.453841 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:53.953656 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:54.453137 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:54.953750 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:55.453823 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:55.953104 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:56.453840 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:56.953721 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:57.072043 1119269 kubeadm.go:1113] duration metric: took 4.352992678s to wait for elevateKubeSystemPrivileges
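	The repeated "kubectl get sa default" runs above appear to be polling until the default ServiceAccount exists so the cluster-admin binding created earlier can take effect; a rough shell equivalent, reusing the binary and kubeconfig paths from the log:
	
	# Sketch: poll until the "default" ServiceAccount is visible, as the log does above.
	until sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
	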
	I0127 03:02:57.072116 1119269 kubeadm.go:394] duration metric: took 4m37.021077009s to StartCluster
	I0127 03:02:57.072145 1119269 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:57.072271 1119269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:57.073904 1119269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:57.074254 1119269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.17 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:57.074373 1119269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:57.074508 1119269 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074520 1119269 config.go:182] Loaded profile config "default-k8s-diff-port-717075": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:57.074535 1119269 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-717075"
	W0127 03:02:57.074544 1119269 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:57.074540 1119269 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074579 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.074576 1119269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717075"
	I0127 03:02:57.074572 1119269 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074588 1119269 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074605 1119269 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-717075"
	I0127 03:02:57.074614 1119269 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-717075"
	W0127 03:02:57.074616 1119269 addons.go:247] addon dashboard should already be in state true
	W0127 03:02:57.074623 1119269 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:57.074653 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.074659 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.075056 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075068 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075068 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075121 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.075123 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075163 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.075267 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.075353 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.081008 1119269 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:57.082885 1119269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:57.094206 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I0127 03:02:57.094931 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.095746 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.095766 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.095843 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0127 03:02:57.095963 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
	I0127 03:02:57.096377 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.096485 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.096649 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.097010 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.097039 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.097172 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.097228 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.097627 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.097906 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.097919 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.098237 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.098286 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.098455 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I0127 03:02:57.098935 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.099556 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.099578 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.099797 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.100439 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.100480 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.100698 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.100896 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.105155 1119269 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-717075"
	W0127 03:02:57.105188 1119269 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:57.105221 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.105609 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.105668 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.121375 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0127 03:02:57.121658 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0127 03:02:57.121901 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.122123 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.122486 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0127 03:02:57.122504 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.122523 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.122758 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.122778 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.122813 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0127 03:02:57.122851 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.122923 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.123171 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.123241 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.123868 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.123978 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.123990 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.124007 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.124368 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.124387 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.124452 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.124681 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.124733 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.125300 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.125347 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.126534 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.127123 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.127415 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.128921 1119269 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:57.128930 1119269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:57.128931 1119269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:57.130374 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:57.130393 1119269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:57.130411 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:57.130431 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.130395 1119269 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:57.130396 1119269 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:57.130621 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.132516 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:57.132532 1119269 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:57.132547 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.135860 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.135912 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136120 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136644 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.136669 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136702 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.136736 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136747 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.136809 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.137008 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136938 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.137108 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.137179 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.137309 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.137376 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.137403 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.137589 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.137621 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.137794 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.138008 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.138010 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.152787 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0127 03:02:57.153399 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.153967 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.154002 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.154377 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.154584 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.156381 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.156603 1119269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:57.156624 1119269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:57.156649 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.159499 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.159944 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.160261 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.160520 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.160684 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.163248 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.164348 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.378051 1119269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:57.433542 1119269 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717075" to be "Ready" ...
	I0127 03:02:57.474874 1119269 node_ready.go:49] node "default-k8s-diff-port-717075" has status "Ready":"True"
	I0127 03:02:57.474911 1119269 node_ready.go:38] duration metric: took 41.327465ms for node "default-k8s-diff-port-717075" to be "Ready" ...
	I0127 03:02:57.474926 1119269 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:57.483255 1119269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:57.519688 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:57.542506 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:57.549073 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:57.549102 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:57.584535 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:57.584568 1119269 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:57.655673 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:57.655711 1119269 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:57.690996 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:57.691028 1119269 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:57.822313 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:57.822349 1119269 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:57.834363 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:57.834392 1119269 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:57.911077 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:58.019919 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:58.019953 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:58.212111 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:58.212145 1119269 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:58.309353 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:58.309381 1119269 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:58.378582 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:58.378611 1119269 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:58.444731 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:58.444762 1119269 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:58.506703 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:58.506745 1119269 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:58.584131 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:58.850852 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.331110115s)
	I0127 03:02:58.850948 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.850973 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.850970 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.308397522s)
	I0127 03:02:58.851017 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.851054 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.851306 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851328 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.851341 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.851348 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.851426 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851444 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.851465 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.851476 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.851634 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851650 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.851693 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:58.851740 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851762 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:58.851765 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.886972 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.887006 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.887346 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.887369 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.219464 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.308329693s)
	I0127 03:02:59.219531 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.219552 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:59.219946 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:59.220003 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.220024 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.220045 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.220059 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:59.220303 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:59.220340 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.220349 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.220364 1119269 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-717075"
	I0127 03:02:59.493877 1119269 pod_ready.go:93] pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:59.493919 1119269 pod_ready.go:82] duration metric: took 2.010631788s for pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:59.493932 1119269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:00.135755 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.551568283s)
	I0127 03:03:00.135819 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:03:00.135831 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:03:00.136153 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:03:00.136171 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:00.136179 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:03:00.136187 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:03:00.136181 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:03:00.136446 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:03:00.136459 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:00.137984 1119269 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717075 addons enable metrics-server
	
	I0127 03:03:00.139476 1119269 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 03:03:00.140933 1119269 addons.go:514] duration metric: took 3.06657827s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
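
The addon step logged above stages each manifest under /etc/kubernetes/addons inside the VM and applies it with the bundled kubectl over SSH. A minimal Go sketch of one such apply, run through a shell the way the ssh_runner lines show (paths copied from the log; running this anywhere other than the test VM is purely illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command shape as the ssh_runner.go:195 lines above, for a single manifest.
		cmd := exec.Command("sh", "-c",
			"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
				"/var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out), err)
	}
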
	I0127 03:03:01.501546 1119269 pod_ready.go:103] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"False"
	I0127 03:03:04.000116 1119269 pod_ready.go:103] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"False"
	I0127 03:03:05.002068 1119269 pod_ready.go:93] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:05.002134 1119269 pod_ready.go:82] duration metric: took 5.508188953s for pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:05.002149 1119269 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:05.007136 1119269 pod_ready.go:93] pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:05.007163 1119269 pod_ready.go:82] duration metric: took 5.003743ms for pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:05.007173 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.013821 1119269 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.013847 1119269 pod_ready.go:82] duration metric: took 1.006667196s for pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.013860 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.018661 1119269 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.018683 1119269 pod_ready.go:82] duration metric: took 4.814763ms for pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.018694 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlkhv" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.022482 1119269 pod_ready.go:93] pod "kube-proxy-nlkhv" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.022500 1119269 pod_ready.go:82] duration metric: took 3.79842ms for pod "kube-proxy-nlkhv" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.022512 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.197960 1119269 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.197986 1119269 pod_ready.go:82] duration metric: took 175.467759ms for pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.197995 1119269 pod_ready.go:39] duration metric: took 8.723057571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
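
The pod_ready lines above poll each system-critical pod until its Ready condition reports "True". A minimal client-go sketch of that condition check, with the kubeconfig path left as a placeholder and the real helper's polling and timeout logic omitted (requires the k8s.io/client-go module):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; the real test uses the kubeconfig written by minikube.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Pod name taken from the log above.
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-nlkhv", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Println("Ready:", c.Status) // "True" once the pod reports ready
			}
		}
	}
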
	I0127 03:03:06.198012 1119269 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:03:06.198073 1119269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:06.215210 1119269 api_server.go:72] duration metric: took 9.140900628s to wait for apiserver process to appear ...
	I0127 03:03:06.215249 1119269 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:03:06.215273 1119269 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8444/healthz ...
	I0127 03:03:06.219951 1119269 api_server.go:279] https://192.168.72.17:8444/healthz returned 200:
	ok
	I0127 03:03:06.220901 1119269 api_server.go:141] control plane version: v1.32.1
	I0127 03:03:06.220922 1119269 api_server.go:131] duration metric: took 5.666132ms to wait for apiserver health ...
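
The healthz probe above hits the apiserver on the non-default port 8444 configured for this profile. A minimal Go sketch of the same GET; skipping TLS verification stands in for loading the cluster CA and is an assumption of the sketch, not what minikube itself does:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; use the cluster CA in practice
		}}
		resp, err := client.Get("https://192.168.72.17:8444/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}
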
	I0127 03:03:06.220929 1119269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:03:06.402128 1119269 system_pods.go:59] 9 kube-system pods found
	I0127 03:03:06.402165 1119269 system_pods.go:61] "coredns-668d6bf9bc-htglq" [2d4500a2-7bc9-4c25-af55-3c20257065c2] Running
	I0127 03:03:06.402172 1119269 system_pods.go:61] "coredns-668d6bf9bc-pwz9n" [cf6b7c7c-59eb-4901-88ba-a6e4556ddd4c] Running
	I0127 03:03:06.402177 1119269 system_pods.go:61] "etcd-default-k8s-diff-port-717075" [50fac615-6926-4023-8467-fa0c3fec39b2] Running
	I0127 03:03:06.402181 1119269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717075" [f86307a0-5994-4178-af8a-43613ed2bd63] Running
	I0127 03:03:06.402186 1119269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717075" [543f1b9a-da5a-4963-adc0-3bb2c88f2de0] Running
	I0127 03:03:06.402191 1119269 system_pods.go:61] "kube-proxy-nlkhv" [57c52d4f-937f-4fc8-98dd-9aa0531f8d17] Running
	I0127 03:03:06.402197 1119269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717075" [bb54f953-7c1f-4ce8-a590-7d029dcfea24] Running
	I0127 03:03:06.402205 1119269 system_pods.go:61] "metrics-server-f79f97bbb-fthnn" [fb8e4d08-fb1f-49a5-8984-44e975174502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:03:06.402211 1119269 system_pods.go:61] "storage-provisioner" [0a7c6b15-4ec5-46cf-8f6e-d98c292af196] Running
	I0127 03:03:06.402225 1119269 system_pods.go:74] duration metric: took 181.288367ms to wait for pod list to return data ...
	I0127 03:03:06.402236 1119269 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:03:06.598976 1119269 default_sa.go:45] found service account: "default"
	I0127 03:03:06.599007 1119269 default_sa.go:55] duration metric: took 196.76041ms for default service account to be created ...
	I0127 03:03:06.599017 1119269 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:03:06.802139 1119269 system_pods.go:87] 9 kube-system pods found
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	8068b689860d2       523cad1a4df73       13 seconds ago      Exited              dashboard-metrics-scraper   9                   38b5d9c31bb05       dashboard-metrics-scraper-86c6bf9756-qrf2m
	dbf5d057b3871       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   ca505afd6e277       kubernetes-dashboard-7779f9b69b-2mzhv
	9a9fccb49de43       6e38f40d628db       21 minutes ago      Running             storage-provisioner         0                   f5fbac96fd512       storage-provisioner
	c1703f883b26c       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   6e840f6bb7e90       coredns-668d6bf9bc-mbkl2
	553d1ad36bdff       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   cf434efce9d5d       coredns-668d6bf9bc-n5wn4
	472056f0bfd28       e29f9c7391fd9       21 minutes ago      Running             kube-proxy                  0                   b345a43b52939       kube-proxy-kwqqr
	0b792de4a2224       a9e7e6b294baf       21 minutes ago      Running             etcd                        2                   bfae9230790e2       etcd-embed-certs-264552
	436b1741e0235       2b0d6572d062c       21 minutes ago      Running             kube-scheduler              2                   7b1f706dd6ee6       kube-scheduler-embed-certs-264552
	376408ceda863       95c0bda56fc4d       21 minutes ago      Running             kube-apiserver              2                   0270a79ccb172       kube-apiserver-embed-certs-264552
	9fb8cdffe822d       019ee182b58e2       21 minutes ago      Running             kube-controller-manager     2                   3899680ef30a9       kube-controller-manager-embed-certs-264552
	
	
	==> containerd <==
	Jan 27 03:18:46 embed-certs-264552 containerd[561]: time="2025-01-27T03:18:46.783052134Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 03:18:46 embed-certs-264552 containerd[561]: time="2025-01-27T03:18:46.783154218Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 03:19:03 embed-certs-264552 containerd[561]: time="2025-01-27T03:19:03.773615294Z" level=info msg="CreateContainer within sandbox \"38b5d9c31bb05a8f57b868f78d964b92af43cffdfa2af2b38e98c683aaf69a7a\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 27 03:19:03 embed-certs-264552 containerd[561]: time="2025-01-27T03:19:03.802215329Z" level=info msg="CreateContainer within sandbox \"38b5d9c31bb05a8f57b868f78d964b92af43cffdfa2af2b38e98c683aaf69a7a\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"63689f99955fea283fec4e18e6a924a39aac957c0dcacea23fce22e7ee1979e4\""
	Jan 27 03:19:03 embed-certs-264552 containerd[561]: time="2025-01-27T03:19:03.802871403Z" level=info msg="StartContainer for \"63689f99955fea283fec4e18e6a924a39aac957c0dcacea23fce22e7ee1979e4\""
	Jan 27 03:19:03 embed-certs-264552 containerd[561]: time="2025-01-27T03:19:03.892049094Z" level=info msg="StartContainer for \"63689f99955fea283fec4e18e6a924a39aac957c0dcacea23fce22e7ee1979e4\" returns successfully"
	Jan 27 03:19:03 embed-certs-264552 containerd[561]: time="2025-01-27T03:19:03.952719901Z" level=info msg="shim disconnected" id=63689f99955fea283fec4e18e6a924a39aac957c0dcacea23fce22e7ee1979e4 namespace=k8s.io
	Jan 27 03:19:03 embed-certs-264552 containerd[561]: time="2025-01-27T03:19:03.952791770Z" level=warning msg="cleaning up after shim disconnected" id=63689f99955fea283fec4e18e6a924a39aac957c0dcacea23fce22e7ee1979e4 namespace=k8s.io
	Jan 27 03:19:03 embed-certs-264552 containerd[561]: time="2025-01-27T03:19:03.952804217Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 03:19:04 embed-certs-264552 containerd[561]: time="2025-01-27T03:19:04.527296884Z" level=info msg="RemoveContainer for \"49d693fb5102758b956a3e14ccb4b846c423c348a22d7ced7fd910355a78447c\""
	Jan 27 03:19:04 embed-certs-264552 containerd[561]: time="2025-01-27T03:19:04.534312209Z" level=info msg="RemoveContainer for \"49d693fb5102758b956a3e14ccb4b846c423c348a22d7ced7fd910355a78447c\" returns successfully"
	Jan 27 03:23:57 embed-certs-264552 containerd[561]: time="2025-01-27T03:23:57.770451978Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 03:23:57 embed-certs-264552 containerd[561]: time="2025-01-27T03:23:57.779674633Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 03:23:57 embed-certs-264552 containerd[561]: time="2025-01-27T03:23:57.781447858Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 03:23:57 embed-certs-264552 containerd[561]: time="2025-01-27T03:23:57.781537660Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 03:24:07 embed-certs-264552 containerd[561]: time="2025-01-27T03:24:07.772705420Z" level=info msg="CreateContainer within sandbox \"38b5d9c31bb05a8f57b868f78d964b92af43cffdfa2af2b38e98c683aaf69a7a\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 27 03:24:07 embed-certs-264552 containerd[561]: time="2025-01-27T03:24:07.796689502Z" level=info msg="CreateContainer within sandbox \"38b5d9c31bb05a8f57b868f78d964b92af43cffdfa2af2b38e98c683aaf69a7a\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"8068b689860d2ce944cefafce36e777d3762857f8c7bb03a243e2a88579aa6ab\""
	Jan 27 03:24:07 embed-certs-264552 containerd[561]: time="2025-01-27T03:24:07.797480305Z" level=info msg="StartContainer for \"8068b689860d2ce944cefafce36e777d3762857f8c7bb03a243e2a88579aa6ab\""
	Jan 27 03:24:07 embed-certs-264552 containerd[561]: time="2025-01-27T03:24:07.885329088Z" level=info msg="StartContainer for \"8068b689860d2ce944cefafce36e777d3762857f8c7bb03a243e2a88579aa6ab\" returns successfully"
	Jan 27 03:24:07 embed-certs-264552 containerd[561]: time="2025-01-27T03:24:07.935844272Z" level=info msg="shim disconnected" id=8068b689860d2ce944cefafce36e777d3762857f8c7bb03a243e2a88579aa6ab namespace=k8s.io
	Jan 27 03:24:07 embed-certs-264552 containerd[561]: time="2025-01-27T03:24:07.935964590Z" level=warning msg="cleaning up after shim disconnected" id=8068b689860d2ce944cefafce36e777d3762857f8c7bb03a243e2a88579aa6ab namespace=k8s.io
	Jan 27 03:24:07 embed-certs-264552 containerd[561]: time="2025-01-27T03:24:07.935975277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 03:24:07 embed-certs-264552 containerd[561]: time="2025-01-27T03:24:07.949957989Z" level=warning msg="cleanup warnings time=\"2025-01-27T03:24:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Jan 27 03:24:08 embed-certs-264552 containerd[561]: time="2025-01-27T03:24:08.270082104Z" level=info msg="RemoveContainer for \"63689f99955fea283fec4e18e6a924a39aac957c0dcacea23fce22e7ee1979e4\""
	Jan 27 03:24:08 embed-certs-264552 containerd[561]: time="2025-01-27T03:24:08.277514957Z" level=info msg="RemoveContainer for \"63689f99955fea283fec4e18e6a924a39aac957c0dcacea23fce22e7ee1979e4\" returns successfully"
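
The repeated PullImage failures above are DNS resolution failures for fake.domain, the host the metrics-server image reference in this test points at and which is not expected to resolve, so the "no such host" errors reflect the test setup rather than an environment fault. A minimal Go sketch reproducing the same resolver error:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Expected to fail with "lookup fake.domain: ... no such host",
		// matching the containerd errors above.
		addrs, err := net.LookupHost("fake.domain")
		fmt.Println(addrs, err)
	}
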
	
	
	==> coredns [553d1ad36bdff6877da98c46d58a1493e33dff4c03ab468bf48d924d119061fc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c1703f883b26cfee427071b06eba83d5d85ac3bc88b7e06ee5ac98342d781203] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-264552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-264552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
	                    minikube.k8s.io/name=embed-certs-264552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T03_02_32_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 03:02:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-264552
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 03:24:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 03:23:27 +0000   Mon, 27 Jan 2025 03:02:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 03:23:27 +0000   Mon, 27 Jan 2025 03:02:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 03:23:27 +0000   Mon, 27 Jan 2025 03:02:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 03:23:27 +0000   Mon, 27 Jan 2025 03:02:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    embed-certs-264552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 884a757f508741248db8b3db23666dbd
	  System UUID:                884a757f-5087-4124-8db8-b3db23666dbd
	  Boot ID:                    4b0e3f05-fb35-43a5-87bf-d4e4757507bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-mbkl2                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-n5wn4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-264552                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-264552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-264552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-kwqqr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-264552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-6dg5x                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-qrf2m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-2mzhv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node embed-certs-264552 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node embed-certs-264552 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node embed-certs-264552 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node embed-certs-264552 event: Registered Node embed-certs-264552 in Controller
	
	
	==> dmesg <==
	[  +0.041827] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.981324] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.931815] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.623157] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.028085] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +0.059290] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070201] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
	[  +0.191497] systemd-fstab-generator[510]: Ignoring "noauto" option for root device
	[  +0.124865] systemd-fstab-generator[522]: Ignoring "noauto" option for root device
	[  +0.303359] systemd-fstab-generator[553]: Ignoring "noauto" option for root device
	[  +1.125529] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[Jan27 02:58] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +1.206628] kauditd_printk_skb: 250 callbacks suppressed
	[  +5.190103] kauditd_printk_skb: 49 callbacks suppressed
	[ +13.128394] kauditd_printk_skb: 48 callbacks suppressed
	[Jan27 03:02] systemd-fstab-generator[3081]: Ignoring "noauto" option for root device
	[  +1.508497] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.086187] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.093803] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.967302] systemd-fstab-generator[3569]: Ignoring "noauto" option for root device
	[  +0.130862] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.785943] kauditd_printk_skb: 112 callbacks suppressed
	[  +5.720528] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [0b792de4a2224e88f380d1ed96f713c75f94d2f5d5bcd976477a6f631b2eba03] <==
	{"level":"info","ts":"2025-01-27T03:02:26.634172Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T03:02:26.634951Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T03:02:26.638494Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.145:2379"}
	{"level":"info","ts":"2025-01-27T03:02:26.637519Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T03:02:26.641853Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T03:02:44.575467Z","caller":"traceutil/trace.go:171","msg":"trace[1707159098] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"370.69355ms","start":"2025-01-27T03:02:44.204756Z","end":"2025-01-27T03:02:44.575450Z","steps":["trace[1707159098] 'process raft request'  (duration: 370.264252ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T03:02:44.575203Z","caller":"traceutil/trace.go:171","msg":"trace[1297114169] linearizableReadLoop","detail":"{readStateIndex:502; appliedIndex:501; }","duration":"181.955188ms","start":"2025-01-27T03:02:44.393219Z","end":"2025-01-27T03:02:44.575175Z","steps":["trace[1297114169] 'read index received'  (duration: 181.782758ms)","trace[1297114169] 'applied index is now lower than readState.Index'  (duration: 171.5µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T03:02:44.576622Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T03:02:44.204734Z","time spent":"370.921664ms","remote":"127.0.0.1:45564","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:486 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-01-27T03:02:44.576717Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.492188ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-264552\" limit:1 ","response":"range_response_count:1 size:6568"}
	{"level":"info","ts":"2025-01-27T03:02:44.576799Z","caller":"traceutil/trace.go:171","msg":"trace[2098623624] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-embed-certs-264552; range_end:; response_count:1; response_revision:488; }","duration":"183.604296ms","start":"2025-01-27T03:02:44.393182Z","end":"2025-01-27T03:02:44.576786Z","steps":["trace[2098623624] 'agreement among raft nodes before linearized reading'  (duration: 183.456816ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T03:02:46.777018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.309533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T03:02:46.777092Z","caller":"traceutil/trace.go:171","msg":"trace[242269888] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:501; }","duration":"114.424366ms","start":"2025-01-27T03:02:46.662653Z","end":"2025-01-27T03:02:46.777077Z","steps":["trace[242269888] 'range keys from in-memory index tree'  (duration: 114.230405ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T03:03:16.219264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.317738ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13477466799324363488 > lease_revoke:<id:3b0994a5b5d97a46>","response":"size:27"}
	{"level":"info","ts":"2025-01-27T03:03:16.219561Z","caller":"traceutil/trace.go:171","msg":"trace[962407020] linearizableReadLoop","detail":"{readStateIndex:574; appliedIndex:573; }","duration":"157.420014ms","start":"2025-01-27T03:03:16.062073Z","end":"2025-01-27T03:03:16.219493Z","steps":["trace[962407020] 'read index received'  (duration: 26.148257ms)","trace[962407020] 'applied index is now lower than readState.Index'  (duration: 131.270788ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T03:03:16.219824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.669886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T03:03:16.219923Z","caller":"traceutil/trace.go:171","msg":"trace[825371841] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:553; }","duration":"157.827621ms","start":"2025-01-27T03:03:16.062039Z","end":"2025-01-27T03:03:16.219866Z","steps":["trace[825371841] 'agreement among raft nodes before linearized reading'  (duration: 157.623027ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T03:12:26.831254Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":840}
	{"level":"info","ts":"2025-01-27T03:12:26.872752Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":840,"took":"39.775471ms","hash":1267610748,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2961408,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-01-27T03:12:26.873040Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1267610748,"revision":840,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T03:17:26.839505Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1101}
	{"level":"info","ts":"2025-01-27T03:17:26.845027Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1101,"took":"4.820228ms","hash":2937543524,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1810432,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T03:17:26.845104Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2937543524,"revision":1101,"compact-revision":840}
	{"level":"info","ts":"2025-01-27T03:22:26.846170Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1352}
	{"level":"info","ts":"2025-01-27T03:22:26.851421Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1352,"took":"4.117196ms","hash":2601259123,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1777664,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T03:22:26.851710Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2601259123,"revision":1352,"compact-revision":1101}
	
	
	==> kernel <==
	 03:24:22 up 26 min,  0 users,  load average: 0.29, 0.20, 0.23
	Linux embed-certs-264552 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [376408ceda8633abf846c20da93a75a1848da294f2e99a43f6612a7fe65b2651] <==
	I0127 03:20:29.785858       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:20:29.787089       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 03:22:28.784382       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:22:28.784854       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 03:22:29.787020       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 03:22:29.787021       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:22:29.787444       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 03:22:29.787929       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 03:22:29.788812       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:22:29.789090       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 03:23:29.789288       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:23:29.789389       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 03:23:29.789287       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:23:29.789652       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 03:23:29.790750       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:23:29.790782       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9fb8cdffe822d28fb5cc151ab9b468daeedbde56c5eb7515149f7a0ce15d2e98] <==
	I0127 03:19:15.785772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="69.659µs"
	E0127 03:19:35.537852       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:19:35.605326       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:20:05.545257       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:20:05.611596       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:20:35.552418       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:20:35.619632       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:21:05.559688       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:21:05.626569       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:21:35.566780       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:21:35.634224       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:22:05.572602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:22:05.644387       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:22:35.579247       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:22:35.653586       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:23:05.585527       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:23:05.661461       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 03:23:27.044469       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-264552"
	E0127 03:23:35.591834       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:23:35.670097       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:24:05.597781       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:24:05.678134       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 03:24:08.287184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="58.502µs"
	I0127 03:24:10.783081       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="68.181µs"
	I0127 03:24:11.126675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="39.663µs"
	
	
	==> kube-proxy [472056f0bfd287030c3c4c2e3932eea8217713159ed1ca52166805905004b992] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 03:02:37.366198       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 03:02:37.419457       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
	E0127 03:02:37.419557       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 03:02:37.653019       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 03:02:37.653357       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 03:02:37.653394       1 server_linux.go:170] "Using iptables Proxier"
	I0127 03:02:37.665642       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 03:02:37.666028       1 server.go:497] "Version info" version="v1.32.1"
	I0127 03:02:37.666041       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 03:02:37.668596       1 config.go:199] "Starting service config controller"
	I0127 03:02:37.668618       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 03:02:37.668669       1 config.go:105] "Starting endpoint slice config controller"
	I0127 03:02:37.668674       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 03:02:37.669159       1 config.go:329] "Starting node config controller"
	I0127 03:02:37.669168       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 03:02:37.770004       1 shared_informer.go:320] Caches are synced for node config
	I0127 03:02:37.770037       1 shared_informer.go:320] Caches are synced for service config
	I0127 03:02:37.770046       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [436b1741e023587fbeb244c54d5ded9126e21d4a7a01f50171253524803fbd4d] <==
	W0127 03:02:29.619825       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 03:02:29.620170       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:29.631072       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:29.631423       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:29.736391       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:29.737305       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:29.782416       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 03:02:29.782496       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:29.786238       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 03:02:29.786618       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:29.886482       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:29.887113       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:29.985506       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 03:02:29.985971       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 03:02:30.045056       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 03:02:30.045364       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:30.110687       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 03:02:30.113337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:30.131472       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 03:02:30.131594       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:30.148969       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 03:02:30.149445       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:30.171979       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 03:02:30.172058       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 03:02:32.573445       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 03:23:19 embed-certs-264552 kubelet[3470]: E0127 03:23:19.769941    3470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6dg5x" podUID="4b9cd5d7-1055-45ea-8ac9-1a91b9246c0e"
	Jan 27 03:23:29 embed-certs-264552 kubelet[3470]: I0127 03:23:29.768304    3470 scope.go:117] "RemoveContainer" containerID="63689f99955fea283fec4e18e6a924a39aac957c0dcacea23fce22e7ee1979e4"
	Jan 27 03:23:29 embed-certs-264552 kubelet[3470]: E0127 03:23:29.768502    3470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-qrf2m_kubernetes-dashboard(6e45cf44-09c5-48fb-9409-bf15435b1ee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-qrf2m" podUID="6e45cf44-09c5-48fb-9409-bf15435b1ee7"
	Jan 27 03:23:31 embed-certs-264552 kubelet[3470]: E0127 03:23:31.792075    3470 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 03:23:31 embed-certs-264552 kubelet[3470]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 03:23:31 embed-certs-264552 kubelet[3470]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 03:23:31 embed-certs-264552 kubelet[3470]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 03:23:31 embed-certs-264552 kubelet[3470]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 03:23:33 embed-certs-264552 kubelet[3470]: E0127 03:23:33.770058    3470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6dg5x" podUID="4b9cd5d7-1055-45ea-8ac9-1a91b9246c0e"
	Jan 27 03:23:42 embed-certs-264552 kubelet[3470]: I0127 03:23:42.769427    3470 scope.go:117] "RemoveContainer" containerID="63689f99955fea283fec4e18e6a924a39aac957c0dcacea23fce22e7ee1979e4"
	Jan 27 03:23:42 embed-certs-264552 kubelet[3470]: E0127 03:23:42.770798    3470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-qrf2m_kubernetes-dashboard(6e45cf44-09c5-48fb-9409-bf15435b1ee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-qrf2m" podUID="6e45cf44-09c5-48fb-9409-bf15435b1ee7"
	Jan 27 03:23:45 embed-certs-264552 kubelet[3470]: E0127 03:23:45.770647    3470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6dg5x" podUID="4b9cd5d7-1055-45ea-8ac9-1a91b9246c0e"
	Jan 27 03:23:54 embed-certs-264552 kubelet[3470]: I0127 03:23:54.768185    3470 scope.go:117] "RemoveContainer" containerID="63689f99955fea283fec4e18e6a924a39aac957c0dcacea23fce22e7ee1979e4"
	Jan 27 03:23:54 embed-certs-264552 kubelet[3470]: E0127 03:23:54.768701    3470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-qrf2m_kubernetes-dashboard(6e45cf44-09c5-48fb-9409-bf15435b1ee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-qrf2m" podUID="6e45cf44-09c5-48fb-9409-bf15435b1ee7"
	Jan 27 03:23:57 embed-certs-264552 kubelet[3470]: E0127 03:23:57.781851    3470 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 03:23:57 embed-certs-264552 kubelet[3470]: E0127 03:23:57.782398    3470 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 03:23:57 embed-certs-264552 kubelet[3470]: E0127 03:23:57.783084    3470 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt575,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-6dg5x_kube-system(4b9cd5d7-1055-45ea-8ac9-1a91b9246c0e): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 03:23:57 embed-certs-264552 kubelet[3470]: E0127 03:23:57.784463    3470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6dg5x" podUID="4b9cd5d7-1055-45ea-8ac9-1a91b9246c0e"
	Jan 27 03:24:07 embed-certs-264552 kubelet[3470]: I0127 03:24:07.769154    3470 scope.go:117] "RemoveContainer" containerID="63689f99955fea283fec4e18e6a924a39aac957c0dcacea23fce22e7ee1979e4"
	Jan 27 03:24:08 embed-certs-264552 kubelet[3470]: I0127 03:24:08.267828    3470 scope.go:117] "RemoveContainer" containerID="63689f99955fea283fec4e18e6a924a39aac957c0dcacea23fce22e7ee1979e4"
	Jan 27 03:24:08 embed-certs-264552 kubelet[3470]: I0127 03:24:08.268262    3470 scope.go:117] "RemoveContainer" containerID="8068b689860d2ce944cefafce36e777d3762857f8c7bb03a243e2a88579aa6ab"
	Jan 27 03:24:08 embed-certs-264552 kubelet[3470]: E0127 03:24:08.268430    3470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-qrf2m_kubernetes-dashboard(6e45cf44-09c5-48fb-9409-bf15435b1ee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-qrf2m" podUID="6e45cf44-09c5-48fb-9409-bf15435b1ee7"
	Jan 27 03:24:10 embed-certs-264552 kubelet[3470]: E0127 03:24:10.769264    3470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6dg5x" podUID="4b9cd5d7-1055-45ea-8ac9-1a91b9246c0e"
	Jan 27 03:24:11 embed-certs-264552 kubelet[3470]: I0127 03:24:11.112353    3470 scope.go:117] "RemoveContainer" containerID="8068b689860d2ce944cefafce36e777d3762857f8c7bb03a243e2a88579aa6ab"
	Jan 27 03:24:11 embed-certs-264552 kubelet[3470]: E0127 03:24:11.112821    3470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-qrf2m_kubernetes-dashboard(6e45cf44-09c5-48fb-9409-bf15435b1ee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-qrf2m" podUID="6e45cf44-09c5-48fb-9409-bf15435b1ee7"
	
	
	==> kubernetes-dashboard [dbf5d057b3871221a002f2426584a3d1e47a8688ac78afb10ea3fe90851084d4] <==
	2025/01/27 03:12:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:12:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:13:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:13:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:14:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:14:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:15:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:15:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:16:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:16:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:17:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:17:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:18:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:18:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:19:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:19:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:20:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:20:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:21:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:21:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:22:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:22:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:23:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:23:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:24:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [9a9fccb49de433ec79d9e74f203c1b37701ee94e2eeb080a254486298aa05ef4] <==
	I0127 03:02:39.825023       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 03:02:39.992137       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 03:02:39.992220       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 03:02:40.029051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 03:02:40.030294       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-264552_e6786e49-36e6-4541-96a9-f5240f005732!
	I0127 03:02:40.031435       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"01071346-4963-4253-9d11-c4e028dd666b", APIVersion:"v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-264552_e6786e49-36e6-4541-96a9-f5240f005732 became leader
	I0127 03:02:40.133063       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-264552_e6786e49-36e6-4541-96a9-f5240f005732!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-264552 -n embed-certs-264552
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-264552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-6dg5x
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-264552 describe pod metrics-server-f79f97bbb-6dg5x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-264552 describe pod metrics-server-f79f97bbb-6dg5x: exit status 1 (74.647355ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-6dg5x" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-264552 describe pod metrics-server-f79f97bbb-6dg5x: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1613.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1614.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-717075 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 02:57:32.137846 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:32.216417 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:32.222791 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:32.234112 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:32.255472 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:32.296901 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:32.378403 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:32.540056 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:32.862344 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:33.504658 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-717075 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m52.563146157s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-717075] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "default-k8s-diff-port-717075" primary control-plane node in "default-k8s-diff-port-717075" cluster
	* Restarting existing kvm2 VM for "default-k8s-diff-port-717075" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717075 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:57:30.000066 1119269 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:57:30.000244 1119269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:57:30.000266 1119269 out.go:358] Setting ErrFile to fd 2...
	I0127 02:57:30.000273 1119269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:57:30.000728 1119269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 02:57:30.001712 1119269 out.go:352] Setting JSON to false
	I0127 02:57:30.003067 1119269 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13197,"bootTime":1737933453,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:57:30.003189 1119269 start.go:139] virtualization: kvm guest
	I0127 02:57:30.005022 1119269 out.go:177] * [default-k8s-diff-port-717075] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:57:30.006479 1119269 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:57:30.006509 1119269 notify.go:220] Checking for updates...
	I0127 02:57:30.008758 1119269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:57:30.009897 1119269 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 02:57:30.011278 1119269 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	I0127 02:57:30.012414 1119269 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:57:30.013548 1119269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:57:30.015235 1119269 config.go:182] Loaded profile config "default-k8s-diff-port-717075": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:57:30.015885 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:57:30.015965 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:57:30.036971 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0127 02:57:30.037414 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:57:30.038104 1119269 main.go:141] libmachine: Using API Version  1
	I0127 02:57:30.038126 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:57:30.038563 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:57:30.038788 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 02:57:30.039056 1119269 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:57:30.039503 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:57:30.039556 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:57:30.055428 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33385
	I0127 02:57:30.056015 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:57:30.056598 1119269 main.go:141] libmachine: Using API Version  1
	I0127 02:57:30.056614 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:57:30.057147 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:57:30.057330 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 02:57:30.091514 1119269 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 02:57:30.092720 1119269 start.go:297] selected driver: kvm2
	I0127 02:57:30.092740 1119269 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-717075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-717075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:57:30.092891 1119269 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:57:30.093652 1119269 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:30.094546 1119269 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-1057178/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 02:57:30.110506 1119269 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 02:57:30.110917 1119269 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 02:57:30.110956 1119269 cni.go:84] Creating CNI manager for ""
	I0127 02:57:30.111012 1119269 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 02:57:30.111066 1119269 start.go:340] cluster config:
	{Name:default-k8s-diff-port-717075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-717075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:57:30.111179 1119269 iso.go:125] acquiring lock: {Name:mkd30bc9d11f9170e89ad95ce7ba25fa6d1e04f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:57:30.112772 1119269 out.go:177] * Starting "default-k8s-diff-port-717075" primary control-plane node in "default-k8s-diff-port-717075" cluster
	I0127 02:57:30.113746 1119269 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:57:30.113785 1119269 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 02:57:30.113798 1119269 cache.go:56] Caching tarball of preloaded images
	I0127 02:57:30.113894 1119269 preload.go:172] Found /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 02:57:30.113907 1119269 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 02:57:30.114012 1119269 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/default-k8s-diff-port-717075/config.json ...
	I0127 02:57:30.114204 1119269 start.go:360] acquireMachinesLock for default-k8s-diff-port-717075: {Name:mka8dc154c517d64837d06e2f84f8bddd0b82c58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 02:57:56.834056 1119269 start.go:364] duration metric: took 26.719803903s to acquireMachinesLock for "default-k8s-diff-port-717075"
	I0127 02:57:56.834121 1119269 start.go:96] Skipping create...Using existing machine configuration
	I0127 02:57:56.834143 1119269 fix.go:54] fixHost starting: 
	I0127 02:57:56.834601 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:57:56.834658 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:57:56.852175 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43739
	I0127 02:57:56.852724 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:57:56.853313 1119269 main.go:141] libmachine: Using API Version  1
	I0127 02:57:56.853361 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:57:56.853713 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:57:56.853914 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 02:57:56.854038 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 02:57:56.855605 1119269 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717075: state=Stopped err=<nil>
	I0127 02:57:56.855644 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	W0127 02:57:56.855812 1119269 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 02:57:56.857880 1119269 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-717075" ...
	I0127 02:57:56.859395 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Start
	I0127 02:57:56.859617 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) starting domain...
	I0127 02:57:56.859634 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) ensuring networks are active...
	I0127 02:57:56.860596 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Ensuring network default is active
	I0127 02:57:56.861000 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Ensuring network mk-default-k8s-diff-port-717075 is active
	I0127 02:57:56.861439 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) getting domain XML...
	I0127 02:57:56.862301 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) creating domain...
	I0127 02:57:58.162075 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) waiting for IP...
	I0127 02:57:58.163009 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:57:58.163491 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:57:58.163567 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:57:58.163490 1119628 retry.go:31] will retry after 192.033542ms: waiting for domain to come up
	I0127 02:57:58.357038 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:57:58.357543 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:57:58.357578 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:57:58.357476 1119628 retry.go:31] will retry after 365.426329ms: waiting for domain to come up
	I0127 02:57:58.725284 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:57:58.725837 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:57:58.725882 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:57:58.725793 1119628 retry.go:31] will retry after 341.392186ms: waiting for domain to come up
	I0127 02:57:59.068506 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:57:59.069270 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:57:59.069302 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:57:59.069227 1119628 retry.go:31] will retry after 373.077565ms: waiting for domain to come up
	I0127 02:57:59.443496 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:57:59.444010 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:57:59.444072 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:57:59.443970 1119628 retry.go:31] will retry after 549.249519ms: waiting for domain to come up
	I0127 02:57:59.994354 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:57:59.994913 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:57:59.994934 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:57:59.994878 1119628 retry.go:31] will retry after 881.699637ms: waiting for domain to come up
	I0127 02:58:00.878067 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:00.878624 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:58:00.878673 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:58:00.878587 1119628 retry.go:31] will retry after 770.199498ms: waiting for domain to come up
	I0127 02:58:01.650038 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:01.650530 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:58:01.650567 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:58:01.650496 1119628 retry.go:31] will retry after 1.204374174s: waiting for domain to come up
	I0127 02:58:02.856452 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:02.857125 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:58:02.857157 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:58:02.857083 1119628 retry.go:31] will retry after 1.758143735s: waiting for domain to come up
	I0127 02:58:04.617443 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:04.617918 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:58:04.617949 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:58:04.617873 1119628 retry.go:31] will retry after 2.154242703s: waiting for domain to come up
	I0127 02:58:06.774225 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:06.774859 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:58:06.774890 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:58:06.774842 1119628 retry.go:31] will retry after 1.910060209s: waiting for domain to come up
	I0127 02:58:08.686013 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:08.686554 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:58:08.686637 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:58:08.686521 1119628 retry.go:31] will retry after 3.506435159s: waiting for domain to come up
	I0127 02:58:12.193996 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:12.194483 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | unable to find current IP address of domain default-k8s-diff-port-717075 in network mk-default-k8s-diff-port-717075
	I0127 02:58:12.194514 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | I0127 02:58:12.194421 1119628 retry.go:31] will retry after 3.75547732s: waiting for domain to come up
	I0127 02:58:15.954319 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:15.954915 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) found domain IP: 192.168.72.17
	I0127 02:58:15.954957 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has current primary IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:15.954964 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) reserving static IP address...
	I0127 02:58:15.955386 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-717075", mac: "52:54:00:22:da:ad", ip: "192.168.72.17"} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:15.955431 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | skip adding static IP to network mk-default-k8s-diff-port-717075 - found existing host DHCP lease matching {name: "default-k8s-diff-port-717075", mac: "52:54:00:22:da:ad", ip: "192.168.72.17"}
	I0127 02:58:15.955459 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) reserved static IP address 192.168.72.17 for domain default-k8s-diff-port-717075
	I0127 02:58:15.955481 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) waiting for SSH...
	I0127 02:58:15.955495 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Getting to WaitForSSH function...
	I0127 02:58:15.957520 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:15.957797 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:15.957826 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:15.957924 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Using SSH client type: external
	I0127 02:58:15.957948 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa (-rw-------)
	I0127 02:58:15.957988 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 02:58:15.958006 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | About to run SSH command:
	I0127 02:58:15.958019 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | exit 0
	I0127 02:58:16.085202 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | SSH cmd err, output: <nil>: 
	I0127 02:58:16.085622 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetConfigRaw
	I0127 02:58:16.086376 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetIP
	I0127 02:58:16.088990 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.089353 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:16.089392 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.089598 1119269 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/default-k8s-diff-port-717075/config.json ...
	I0127 02:58:16.089785 1119269 machine.go:93] provisionDockerMachine start ...
	I0127 02:58:16.089804 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 02:58:16.090011 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 02:58:16.092235 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.092582 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:16.092609 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.092740 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 02:58:16.092921 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 02:58:16.093105 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 02:58:16.093276 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 02:58:16.093447 1119269 main.go:141] libmachine: Using SSH client type: native
	I0127 02:58:16.093673 1119269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 02:58:16.093688 1119269 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 02:58:16.201559 1119269 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 02:58:16.201622 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetMachineName
	I0127 02:58:16.201929 1119269 buildroot.go:166] provisioning hostname "default-k8s-diff-port-717075"
	I0127 02:58:16.201964 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetMachineName
	I0127 02:58:16.202163 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 02:58:16.204721 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.205098 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:16.205133 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.205305 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 02:58:16.205502 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 02:58:16.205665 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 02:58:16.205794 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 02:58:16.205928 1119269 main.go:141] libmachine: Using SSH client type: native
	I0127 02:58:16.206118 1119269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 02:58:16.206131 1119269 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-717075 && echo "default-k8s-diff-port-717075" | sudo tee /etc/hostname
	I0127 02:58:16.332690 1119269 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717075
	
	I0127 02:58:16.332725 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 02:58:16.335752 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.336243 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:16.336300 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.336431 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 02:58:16.336633 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 02:58:16.336818 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 02:58:16.337023 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 02:58:16.337207 1119269 main.go:141] libmachine: Using SSH client type: native
	I0127 02:58:16.337445 1119269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 02:58:16.337466 1119269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-717075' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-717075/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-717075' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 02:58:16.450268 1119269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:58:16.450301 1119269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-1057178/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-1057178/.minikube}
	I0127 02:58:16.450353 1119269 buildroot.go:174] setting up certificates
	I0127 02:58:16.450372 1119269 provision.go:84] configureAuth start
	I0127 02:58:16.450385 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetMachineName
	I0127 02:58:16.450701 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetIP
	I0127 02:58:16.453348 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.453757 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:16.453794 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.453922 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 02:58:16.455811 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.456211 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:16.456241 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.456399 1119269 provision.go:143] copyHostCerts
	I0127 02:58:16.456455 1119269 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem, removing ...
	I0127 02:58:16.456482 1119269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem
	I0127 02:58:16.456539 1119269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem (1078 bytes)
	I0127 02:58:16.456659 1119269 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem, removing ...
	I0127 02:58:16.456671 1119269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem
	I0127 02:58:16.456694 1119269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem (1123 bytes)
	I0127 02:58:16.456758 1119269 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem, removing ...
	I0127 02:58:16.456767 1119269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem
	I0127 02:58:16.456784 1119269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem (1675 bytes)
	I0127 02:58:16.456841 1119269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-717075 san=[127.0.0.1 192.168.72.17 default-k8s-diff-port-717075 localhost minikube]
	I0127 02:58:16.595442 1119269 provision.go:177] copyRemoteCerts
	I0127 02:58:16.595510 1119269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 02:58:16.595538 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 02:58:16.598096 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.598595 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:16.598636 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.598783 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 02:58:16.598983 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 02:58:16.599146 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 02:58:16.599259 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 02:58:16.684429 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 02:58:16.709624 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0127 02:58:16.733419 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 02:58:16.757854 1119269 provision.go:87] duration metric: took 307.463457ms to configureAuth
	I0127 02:58:16.757893 1119269 buildroot.go:189] setting minikube options for container-runtime
	I0127 02:58:16.758122 1119269 config.go:182] Loaded profile config "default-k8s-diff-port-717075": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:58:16.758138 1119269 machine.go:96] duration metric: took 668.341309ms to provisionDockerMachine
	I0127 02:58:16.758148 1119269 start.go:293] postStartSetup for "default-k8s-diff-port-717075" (driver="kvm2")
	I0127 02:58:16.758162 1119269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 02:58:16.758198 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 02:58:16.758545 1119269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 02:58:16.758578 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 02:58:16.761010 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.761399 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:16.761439 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.761623 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 02:58:16.761830 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 02:58:16.761997 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 02:58:16.762158 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 02:58:16.849607 1119269 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 02:58:16.854215 1119269 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 02:58:16.854251 1119269 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/addons for local assets ...
	I0127 02:58:16.854333 1119269 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/files for local assets ...
	I0127 02:58:16.854445 1119269 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem -> 10644392.pem in /etc/ssl/certs
	I0127 02:58:16.854575 1119269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 02:58:16.866388 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /etc/ssl/certs/10644392.pem (1708 bytes)
	I0127 02:58:16.891157 1119269 start.go:296] duration metric: took 132.988693ms for postStartSetup
	I0127 02:58:16.891210 1119269 fix.go:56] duration metric: took 20.057072464s for fixHost
	I0127 02:58:16.891242 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 02:58:16.893863 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.894250 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:16.894292 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:16.894450 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 02:58:16.894680 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 02:58:16.894837 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 02:58:16.894979 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 02:58:16.895136 1119269 main.go:141] libmachine: Using SSH client type: native
	I0127 02:58:16.895346 1119269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 02:58:16.895358 1119269 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 02:58:17.001739 1119269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946696.957681638
	
	I0127 02:58:17.001768 1119269 fix.go:216] guest clock: 1737946696.957681638
	I0127 02:58:17.001775 1119269 fix.go:229] Guest: 2025-01-27 02:58:16.957681638 +0000 UTC Remote: 2025-01-27 02:58:16.891216459 +0000 UTC m=+46.937966901 (delta=66.465179ms)
	I0127 02:58:17.001817 1119269 fix.go:200] guest clock delta is within tolerance: 66.465179ms
	I0127 02:58:17.001829 1119269 start.go:83] releasing machines lock for "default-k8s-diff-port-717075", held for 20.167738081s
	I0127 02:58:17.001862 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 02:58:17.002187 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetIP
	I0127 02:58:17.004912 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:17.005389 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:17.005422 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:17.005544 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 02:58:17.006178 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 02:58:17.006368 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 02:58:17.006457 1119269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 02:58:17.006515 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 02:58:17.006653 1119269 ssh_runner.go:195] Run: cat /version.json
	I0127 02:58:17.006681 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 02:58:17.009367 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:17.009675 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:17.009700 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:17.009730 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:17.009828 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 02:58:17.009996 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 02:58:17.010157 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:17.010179 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 02:58:17.010194 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:17.010307 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 02:58:17.010360 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 02:58:17.010446 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 02:58:17.010586 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 02:58:17.010755 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 02:58:17.121061 1119269 ssh_runner.go:195] Run: systemctl --version
	I0127 02:58:17.128809 1119269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 02:58:17.135193 1119269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 02:58:17.135256 1119269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 02:58:17.157499 1119269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 02:58:17.157527 1119269 start.go:495] detecting cgroup driver to use...
	I0127 02:58:17.157616 1119269 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 02:58:17.194414 1119269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 02:58:17.209819 1119269 docker.go:217] disabling cri-docker service (if available) ...
	I0127 02:58:17.209904 1119269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 02:58:17.225005 1119269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 02:58:17.240564 1119269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 02:58:17.366174 1119269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 02:58:17.536056 1119269 docker.go:233] disabling docker service ...
	I0127 02:58:17.536135 1119269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 02:58:17.552932 1119269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 02:58:17.568188 1119269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 02:58:17.688453 1119269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 02:58:17.820715 1119269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 02:58:17.836896 1119269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 02:58:17.857389 1119269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 02:58:17.869093 1119269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 02:58:17.887551 1119269 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 02:58:17.887640 1119269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 02:58:17.900610 1119269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 02:58:17.913190 1119269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 02:58:17.925469 1119269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 02:58:17.937206 1119269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 02:58:17.949560 1119269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 02:58:17.961719 1119269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 02:58:17.974384 1119269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 02:58:17.986565 1119269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 02:58:17.998119 1119269 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 02:58:17.998218 1119269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 02:58:18.015332 1119269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 02:58:18.027703 1119269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:58:18.161983 1119269 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 02:58:18.192975 1119269 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 02:58:18.193063 1119269 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 02:58:18.198294 1119269 retry.go:31] will retry after 761.356809ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 02:58:18.959998 1119269 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 02:58:18.966066 1119269 start.go:563] Will wait 60s for crictl version
	I0127 02:58:18.966135 1119269 ssh_runner.go:195] Run: which crictl
	I0127 02:58:18.970440 1119269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 02:58:19.013764 1119269 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 02:58:19.013843 1119269 ssh_runner.go:195] Run: containerd --version
	I0127 02:58:19.049298 1119269 ssh_runner.go:195] Run: containerd --version
	I0127 02:58:19.076930 1119269 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 02:58:19.078437 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetIP
	I0127 02:58:19.082230 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:19.082622 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 02:58:19.082657 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 02:58:19.082952 1119269 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 02:58:19.087854 1119269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:58:19.105891 1119269 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-717075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-717075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 02:58:19.106132 1119269 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:58:19.106221 1119269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:58:19.142039 1119269 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 02:58:19.142066 1119269 containerd.go:534] Images already preloaded, skipping extraction
	I0127 02:58:19.142123 1119269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:58:19.192142 1119269 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 02:58:19.192177 1119269 cache_images.go:84] Images are preloaded, skipping loading
	I0127 02:58:19.192188 1119269 kubeadm.go:934] updating node { 192.168.72.17 8444 v1.32.1 containerd true true} ...
	I0127 02:58:19.192347 1119269 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-717075 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-717075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 02:58:19.192425 1119269 ssh_runner.go:195] Run: sudo crictl info
	I0127 02:58:19.229075 1119269 cni.go:84] Creating CNI manager for ""
	I0127 02:58:19.229107 1119269 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 02:58:19.229120 1119269 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:58:19.229151 1119269 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.17 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-717075 NodeName:default-k8s-diff-port-717075 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 02:58:19.229278 1119269 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.17
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-717075"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.17"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.17"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 02:58:19.229347 1119269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 02:58:19.241538 1119269 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:58:19.241616 1119269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:58:19.252453 1119269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (333 bytes)
	I0127 02:58:19.270808 1119269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:58:19.292350 1119269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2321 bytes)
	I0127 02:58:19.311980 1119269 ssh_runner.go:195] Run: grep 192.168.72.17	control-plane.minikube.internal$ /etc/hosts
	I0127 02:58:19.316153 1119269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:58:19.330096 1119269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:58:19.451488 1119269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:58:19.471614 1119269 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/default-k8s-diff-port-717075 for IP: 192.168.72.17
	I0127 02:58:19.471638 1119269 certs.go:194] generating shared ca certs ...
	I0127 02:58:19.471655 1119269 certs.go:226] acquiring lock for ca certs: {Name:mk567acc23cbe907605c03a2ec03c8e4859e8343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:58:19.471852 1119269 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key
	I0127 02:58:19.471917 1119269 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key
	I0127 02:58:19.471933 1119269 certs.go:256] generating profile certs ...
	I0127 02:58:19.472080 1119269 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/default-k8s-diff-port-717075/client.key
	I0127 02:58:19.472185 1119269 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/default-k8s-diff-port-717075/apiserver.key.26da8593
	I0127 02:58:19.472251 1119269 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/default-k8s-diff-port-717075/proxy-client.key
	I0127 02:58:19.472393 1119269 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem (1338 bytes)
	W0127 02:58:19.472453 1119269 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439_empty.pem, impossibly tiny 0 bytes
	I0127 02:58:19.472469 1119269 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 02:58:19.472511 1119269 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:58:19.472543 1119269 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:58:19.472577 1119269 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem (1675 bytes)
	I0127 02:58:19.472646 1119269 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem (1708 bytes)
	I0127 02:58:19.473325 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:58:19.506851 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:58:19.546684 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:58:19.579354 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 02:58:19.618771 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/default-k8s-diff-port-717075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0127 02:58:19.654975 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/default-k8s-diff-port-717075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 02:58:19.688677 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/default-k8s-diff-port-717075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:58:19.731798 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/default-k8s-diff-port-717075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 02:58:19.763095 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:58:19.797415 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem --> /usr/share/ca-certificates/1064439.pem (1338 bytes)
	I0127 02:58:19.825448 1119269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /usr/share/ca-certificates/10644392.pem (1708 bytes)
	I0127 02:58:19.852114 1119269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:58:19.872098 1119269 ssh_runner.go:195] Run: openssl version
	I0127 02:58:19.879129 1119269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1064439.pem && ln -fs /usr/share/ca-certificates/1064439.pem /etc/ssl/certs/1064439.pem"
	I0127 02:58:19.892157 1119269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1064439.pem
	I0127 02:58:19.897414 1119269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:51 /usr/share/ca-certificates/1064439.pem
	I0127 02:58:19.897489 1119269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1064439.pem
	I0127 02:58:19.904430 1119269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1064439.pem /etc/ssl/certs/51391683.0"
	I0127 02:58:19.921048 1119269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10644392.pem && ln -fs /usr/share/ca-certificates/10644392.pem /etc/ssl/certs/10644392.pem"
	I0127 02:58:19.937707 1119269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10644392.pem
	I0127 02:58:19.943374 1119269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:51 /usr/share/ca-certificates/10644392.pem
	I0127 02:58:19.943451 1119269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10644392.pem
	I0127 02:58:19.950359 1119269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10644392.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:58:19.965571 1119269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:58:19.977724 1119269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:58:19.982460 1119269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:43 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:58:19.982520 1119269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:58:19.990832 1119269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:58:20.004120 1119269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:58:20.009426 1119269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 02:58:20.016098 1119269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 02:58:20.022627 1119269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 02:58:20.029157 1119269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 02:58:20.037010 1119269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 02:58:20.044098 1119269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 02:58:20.051049 1119269 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-717075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-717075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:58:20.051185 1119269 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 02:58:20.051262 1119269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:58:20.103142 1119269 cri.go:89] found id: "a0ca7b5a46b4499b6a6624140966f74684cb379958b5db4fb54c3305b2fd16aa"
	I0127 02:58:20.103170 1119269 cri.go:89] found id: "7a54e169d67f817f5d588c9955989fff8a68b80c55cbbe23c9e0e30e2f3a28db"
	I0127 02:58:20.103174 1119269 cri.go:89] found id: "613a53bb325f4ffa48ca73004169e71ace7723479d66c205817ad340522683ab"
	I0127 02:58:20.103178 1119269 cri.go:89] found id: "d37318c0ad3a4531ef73cd08b33860101f76eee3fd7ada587164194666b58139"
	I0127 02:58:20.103181 1119269 cri.go:89] found id: "ea0c2f4447ba91ea0376d43d25855188032417d16233fdad86c9593a611d850b"
	I0127 02:58:20.103184 1119269 cri.go:89] found id: "9dee51351f8a1e1b39ce050a85b40d0c86a315e58b8a206698d6ab762a2d52a6"
	I0127 02:58:20.103187 1119269 cri.go:89] found id: "23a854d96fbe3cb99fe2185a4467dbaa8dcd6b0b8394ad826c553a246ff6ab7a"
	I0127 02:58:20.103189 1119269 cri.go:89] found id: ""
	I0127 02:58:20.103243 1119269 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 02:58:20.120447 1119269 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T02:58:20Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 02:58:20.120565 1119269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 02:58:20.131824 1119269 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 02:58:20.131852 1119269 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 02:58:20.131914 1119269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 02:58:20.144189 1119269 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:58:20.145597 1119269 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-717075" does not appear in /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 02:58:20.146415 1119269 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-1057178/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-717075" cluster setting kubeconfig missing "default-k8s-diff-port-717075" context setting]
	I0127 02:58:20.147632 1119269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:58:20.150020 1119269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 02:58:20.161309 1119269 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.17
	I0127 02:58:20.161351 1119269 kubeadm.go:1160] stopping kube-system containers ...
	I0127 02:58:20.161369 1119269 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 02:58:20.161431 1119269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:58:20.203426 1119269 cri.go:89] found id: "a0ca7b5a46b4499b6a6624140966f74684cb379958b5db4fb54c3305b2fd16aa"
	I0127 02:58:20.203456 1119269 cri.go:89] found id: "7a54e169d67f817f5d588c9955989fff8a68b80c55cbbe23c9e0e30e2f3a28db"
	I0127 02:58:20.203462 1119269 cri.go:89] found id: "613a53bb325f4ffa48ca73004169e71ace7723479d66c205817ad340522683ab"
	I0127 02:58:20.203466 1119269 cri.go:89] found id: "d37318c0ad3a4531ef73cd08b33860101f76eee3fd7ada587164194666b58139"
	I0127 02:58:20.203471 1119269 cri.go:89] found id: "ea0c2f4447ba91ea0376d43d25855188032417d16233fdad86c9593a611d850b"
	I0127 02:58:20.203482 1119269 cri.go:89] found id: "9dee51351f8a1e1b39ce050a85b40d0c86a315e58b8a206698d6ab762a2d52a6"
	I0127 02:58:20.203487 1119269 cri.go:89] found id: "23a854d96fbe3cb99fe2185a4467dbaa8dcd6b0b8394ad826c553a246ff6ab7a"
	I0127 02:58:20.203489 1119269 cri.go:89] found id: ""
	I0127 02:58:20.203494 1119269 cri.go:252] Stopping containers: [a0ca7b5a46b4499b6a6624140966f74684cb379958b5db4fb54c3305b2fd16aa 7a54e169d67f817f5d588c9955989fff8a68b80c55cbbe23c9e0e30e2f3a28db 613a53bb325f4ffa48ca73004169e71ace7723479d66c205817ad340522683ab d37318c0ad3a4531ef73cd08b33860101f76eee3fd7ada587164194666b58139 ea0c2f4447ba91ea0376d43d25855188032417d16233fdad86c9593a611d850b 9dee51351f8a1e1b39ce050a85b40d0c86a315e58b8a206698d6ab762a2d52a6 23a854d96fbe3cb99fe2185a4467dbaa8dcd6b0b8394ad826c553a246ff6ab7a]
	I0127 02:58:20.203552 1119269 ssh_runner.go:195] Run: which crictl
	I0127 02:58:20.208089 1119269 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 a0ca7b5a46b4499b6a6624140966f74684cb379958b5db4fb54c3305b2fd16aa 7a54e169d67f817f5d588c9955989fff8a68b80c55cbbe23c9e0e30e2f3a28db 613a53bb325f4ffa48ca73004169e71ace7723479d66c205817ad340522683ab d37318c0ad3a4531ef73cd08b33860101f76eee3fd7ada587164194666b58139 ea0c2f4447ba91ea0376d43d25855188032417d16233fdad86c9593a611d850b 9dee51351f8a1e1b39ce050a85b40d0c86a315e58b8a206698d6ab762a2d52a6 23a854d96fbe3cb99fe2185a4467dbaa8dcd6b0b8394ad826c553a246ff6ab7a
	I0127 02:58:20.253110 1119269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 02:58:20.271879 1119269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:58:20.283043 1119269 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 02:58:20.283071 1119269 kubeadm.go:157] found existing configuration files:
	
	I0127 02:58:20.283127 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 02:58:20.293364 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 02:58:20.293431 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:58:20.307197 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 02:58:20.319996 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 02:58:20.320078 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:58:20.330027 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 02:58:20.340068 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 02:58:20.340150 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:58:20.354625 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 02:58:20.368428 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 02:58:20.368506 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
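(The grep/rm sequence above applies a simple rule: if a kubeconfig file under /etc/kubernetes does not mention the expected control-plane endpoint — port 8444 for this default-k8s-diff-port profile — it is removed so kubeadm can regenerate it. A minimal local sketch in Go, assuming direct file access rather than the sudo-over-ssh commands the test runs; not minikube's implementation.)

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8444")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, endpoint) {
			fmt.Println("kept:", f)
			continue
		}
		// File missing or pointing at the wrong endpoint: remove it so
		// "kubeadm init phase kubeconfig" can rewrite it (in the log above,
		// all four files were already absent).
		_ = os.Remove(f)
		fmt.Println("removed (or already absent):", f)
	}
}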
	I0127 02:58:20.379647 1119269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 02:58:20.390702 1119269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:20.539648 1119269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:21.631075 1119269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.091379327s)
	I0127 02:58:21.631115 1119269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:21.850547 1119269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:21.929773 1119269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:22.014974 1119269 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:58:22.015078 1119269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:58:22.516077 1119269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:58:23.015259 1119269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:58:23.038185 1119269 api_server.go:72] duration metric: took 1.023212855s to wait for apiserver process to appear ...
	I0127 02:58:23.038220 1119269 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:58:23.038258 1119269 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8444/healthz ...
	I0127 02:58:23.038818 1119269 api_server.go:269] stopped: https://192.168.72.17:8444/healthz: Get "https://192.168.72.17:8444/healthz": dial tcp 192.168.72.17:8444: connect: connection refused
	I0127 02:58:23.538406 1119269 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8444/healthz ...
	I0127 02:58:26.360504 1119269 api_server.go:279] https://192.168.72.17:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 02:58:26.360547 1119269 api_server.go:103] status: https://192.168.72.17:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 02:58:26.360567 1119269 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8444/healthz ...
	I0127 02:58:26.415535 1119269 api_server.go:279] https://192.168.72.17:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 02:58:26.415575 1119269 api_server.go:103] status: https://192.168.72.17:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 02:58:26.538893 1119269 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8444/healthz ...
	I0127 02:58:26.544743 1119269 api_server.go:279] https://192.168.72.17:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:58:26.544771 1119269 api_server.go:103] status: https://192.168.72.17:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:58:27.038732 1119269 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8444/healthz ...
	I0127 02:58:27.046518 1119269 api_server.go:279] https://192.168.72.17:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:58:27.046551 1119269 api_server.go:103] status: https://192.168.72.17:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:58:27.539244 1119269 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8444/healthz ...
	I0127 02:58:27.547345 1119269 api_server.go:279] https://192.168.72.17:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:58:27.547393 1119269 api_server.go:103] status: https://192.168.72.17:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:58:28.038563 1119269 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8444/healthz ...
	I0127 02:58:28.044049 1119269 api_server.go:279] https://192.168.72.17:8444/healthz returned 200:
	ok
	I0127 02:58:28.056091 1119269 api_server.go:141] control plane version: v1.32.1
	I0127 02:58:28.056123 1119269 api_server.go:131] duration metric: took 5.017895428s to wait for apiserver health ...
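(The healthz wait above progresses from "connection refused", through 403 for the anonymous probe and 500 while some post-start hooks are still failing, to 200. A minimal sketch of such a poll loop, assuming the same endpoint and an unauthenticated probe that skips TLS verification — acceptable for a health probe, not for real API traffic.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.17:8444/healthz" // endpoint from the log above
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // anonymous probe only
		},
	}
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // the endpoint answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}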
	I0127 02:58:28.056133 1119269 cni.go:84] Creating CNI manager for ""
	I0127 02:58:28.056140 1119269 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 02:58:28.058179 1119269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 02:58:28.059444 1119269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 02:58:28.078983 1119269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 02:58:28.123487 1119269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 02:58:28.135162 1119269 system_pods.go:59] 8 kube-system pods found
	I0127 02:58:28.135217 1119269 system_pods.go:61] "coredns-668d6bf9bc-jtgng" [91193cc5-b2b7-496e-9e04-fe93bb8cb6be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 02:58:28.135234 1119269 system_pods.go:61] "etcd-default-k8s-diff-port-717075" [0c55d3e4-a331-41ef-88f2-1bb56c7aaf64] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 02:58:28.135244 1119269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717075" [cb5b065f-bfa7-48d3-a6a7-4ed3ab4cc718] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 02:58:28.135256 1119269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717075" [f94f9dbb-33e4-4c99-92d1-81032076da58] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 02:58:28.135276 1119269 system_pods.go:61] "kube-proxy-skg5d" [ec671b0e-12a5-4fb2-9d29-b25dfcea78f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 02:58:28.135288 1119269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717075" [1e2b9d3c-78db-4de2-a7fb-1ab52f7223b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 02:58:28.135302 1119269 system_pods.go:61] "metrics-server-f79f97bbb-8skhl" [da54699d-3063-498d-92b7-19b950fcdc9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 02:58:28.135311 1119269 system_pods.go:61] "storage-provisioner" [84bf2a38-4a79-4702-9aae-f34616c69f18] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 02:58:28.135324 1119269 system_pods.go:74] duration metric: took 11.798908ms to wait for pod list to return data ...
	I0127 02:58:28.135338 1119269 node_conditions.go:102] verifying NodePressure condition ...
	I0127 02:58:28.142958 1119269 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 02:58:28.142993 1119269 node_conditions.go:123] node cpu capacity is 2
	I0127 02:58:28.143008 1119269 node_conditions.go:105] duration metric: took 7.66263ms to run NodePressure ...
	I0127 02:58:28.143037 1119269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:28.533010 1119269 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 02:58:28.548344 1119269 kubeadm.go:739] kubelet initialised
	I0127 02:58:28.548376 1119269 kubeadm.go:740] duration metric: took 15.332332ms waiting for restarted kubelet to initialise ...
	I0127 02:58:28.548391 1119269 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:58:28.558831 1119269 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-jtgng" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:30.568265 1119269 pod_ready.go:103] pod "coredns-668d6bf9bc-jtgng" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:32.565936 1119269 pod_ready.go:93] pod "coredns-668d6bf9bc-jtgng" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:32.565962 1119269 pod_ready.go:82] duration metric: took 4.007102573s for pod "coredns-668d6bf9bc-jtgng" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:32.565973 1119269 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:34.573967 1119269 pod_ready.go:103] pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:37.073141 1119269 pod_ready.go:103] pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:39.074180 1119269 pod_ready.go:103] pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:41.572323 1119269 pod_ready.go:93] pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:41.572354 1119269 pod_ready.go:82] duration metric: took 9.006373288s for pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:41.572368 1119269 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:41.577344 1119269 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:41.577368 1119269 pod_ready.go:82] duration metric: took 4.992171ms for pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:41.577379 1119269 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:41.582631 1119269 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:41.582660 1119269 pod_ready.go:82] duration metric: took 5.273125ms for pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:41.582672 1119269 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-skg5d" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:41.587538 1119269 pod_ready.go:93] pod "kube-proxy-skg5d" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:41.587566 1119269 pod_ready.go:82] duration metric: took 4.885847ms for pod "kube-proxy-skg5d" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:41.587582 1119269 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:41.592027 1119269 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:41.592053 1119269 pod_ready.go:82] duration metric: took 4.460523ms for pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:41.592066 1119269 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:43.599750 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:46.101746 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:48.599060 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:51.100655 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:53.599427 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:55.600211 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:58.100053 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:00.600455 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:03.102008 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:05.599432 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:07.600004 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:10.099140 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:12.099297 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:14.099886 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:16.100126 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:18.100813 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:20.600140 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:23.101337 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:25.600096 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:27.601644 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:30.101984 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:32.103267 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:34.598549 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:36.599182 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:38.601699 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:41.099956 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:43.598418 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:45.599221 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:48.099109 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:50.100783 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:52.101301 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:54.598530 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:56.599681 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:59.100038 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:01.101071 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:03.598503 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:05.601454 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:08.099141 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:10.099869 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:12.101997 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:14.602109 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:17.098931 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:19.099432 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:21.100637 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:23.598664 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:25.599309 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:27.600302 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:30.099644 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:32.101865 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:34.598546 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:37.098875 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:39.100050 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:41.598711 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:43.599030 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:45.601857 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:48.098938 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:50.099378 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:52.099980 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:54.598809 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:57.098836 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:59.099659 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:01.601600 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:04.098912 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:06.100447 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:08.599176 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:10.600125 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:13.099686 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:15.100510 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:17.100898 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:19.598466 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:21.599022 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:24.098793 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:26.099724 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:28.101704 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:30.599517 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:32.600988 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:35.106995 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:37.599940 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:39.619848 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:42.099484 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:44.100513 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:46.598286 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:48.599276 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:51.101073 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:53.101207 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:55.602425 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:58.100297 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:00.101750 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:02.600452 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:05.099225 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:07.099594 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:09.600572 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:11.613207 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:14.098783 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:16.101837 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:18.600416 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:20.601947 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:22.605621 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:25.099839 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:27.100451 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:29.599652 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:31.600099 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:33.600177 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:35.602346 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:38.100810 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:40.598596 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:41.593185 1119269 pod_ready.go:82] duration metric: took 4m0.0010842s for pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:41.593221 1119269 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 03:02:41.593251 1119269 pod_ready.go:39] duration metric: took 4m13.044846596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
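(pod_ready.go above polls each system-critical pod's Ready condition every few seconds; metrics-server-f79f97bbb-8skhl never reports Ready within the 4m0s budget, which is what ultimately forces the cluster reset below. A rough client-go equivalent of that wait, assuming a reachable kubeconfig at a hypothetical path; this is a sketch, not minikube's implementation.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the test uses its own integration kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the pod's Ready condition every 2s, for up to the 4m budget the log uses.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-f79f97bbb-8skhl", metav1.GetOptions{})
		if err != nil {
			return false, nil // not found yet or transient error: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}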
	I0127 03:02:41.593292 1119269 kubeadm.go:597] duration metric: took 4m21.461431723s to restartPrimaryControlPlane
	W0127 03:02:41.593372 1119269 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:41.593408 1119269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:02:43.620030 1119269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.026590178s)
	I0127 03:02:43.620115 1119269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:43.639142 1119269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:43.651292 1119269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:43.661667 1119269 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:43.661687 1119269 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:43.661733 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 03:02:43.672110 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:43.672165 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:43.683718 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 03:02:43.693914 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:43.693983 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:43.704250 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 03:02:43.714202 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:43.714283 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:43.724775 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 03:02:43.734789 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:43.734857 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:43.746079 1119269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:43.925921 1119269 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:52.671028 1119269 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:52.671099 1119269 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:52.671206 1119269 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:52.671380 1119269 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:52.671539 1119269 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:52.671639 1119269 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:52.673297 1119269 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:52.673383 1119269 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:52.673474 1119269 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:52.673554 1119269 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:52.673609 1119269 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:52.673670 1119269 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:52.673716 1119269 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:52.673767 1119269 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:52.673816 1119269 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:52.673876 1119269 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:52.673954 1119269 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:52.673999 1119269 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:52.674047 1119269 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:52.674108 1119269 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:52.674187 1119269 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:52.674263 1119269 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:52.674321 1119269 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:52.674367 1119269 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:52.674447 1119269 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:52.674507 1119269 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:52.675997 1119269 out.go:235]   - Booting up control plane ...
	I0127 03:02:52.676130 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:52.676280 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:52.676377 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:52.676517 1119269 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:52.676652 1119269 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:52.676719 1119269 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:52.676922 1119269 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:52.677082 1119269 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:52.677173 1119269 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001864315s
	I0127 03:02:52.677287 1119269 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:52.677368 1119269 kubeadm.go:310] [api-check] The API server is healthy after 5.001344194s
	I0127 03:02:52.677511 1119269 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:52.677653 1119269 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:52.677715 1119269 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:52.677867 1119269 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-717075 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:52.677952 1119269 kubeadm.go:310] [bootstrap-token] Using token: dptef9.zgjhm0hnxmak7ndp
	I0127 03:02:52.679531 1119269 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:52.679681 1119269 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:52.679793 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:52.680000 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:52.680151 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:52.680307 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:52.680415 1119269 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:52.680548 1119269 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:52.680611 1119269 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:52.680680 1119269 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:52.680690 1119269 kubeadm.go:310] 
	I0127 03:02:52.680769 1119269 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:52.680779 1119269 kubeadm.go:310] 
	I0127 03:02:52.680875 1119269 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:52.680886 1119269 kubeadm.go:310] 
	I0127 03:02:52.680922 1119269 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:52.681024 1119269 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:52.681096 1119269 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:52.681106 1119269 kubeadm.go:310] 
	I0127 03:02:52.681192 1119269 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:52.681208 1119269 kubeadm.go:310] 
	I0127 03:02:52.681275 1119269 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:52.681289 1119269 kubeadm.go:310] 
	I0127 03:02:52.681363 1119269 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:52.681491 1119269 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:52.681562 1119269 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:52.681568 1119269 kubeadm.go:310] 
	I0127 03:02:52.681636 1119269 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:52.681749 1119269 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:52.681759 1119269 kubeadm.go:310] 
	I0127 03:02:52.681896 1119269 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token dptef9.zgjhm0hnxmak7ndp \
	I0127 03:02:52.682053 1119269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
	I0127 03:02:52.682085 1119269 kubeadm.go:310] 	--control-plane 
	I0127 03:02:52.682091 1119269 kubeadm.go:310] 
	I0127 03:02:52.682242 1119269 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:52.682259 1119269 kubeadm.go:310] 
	I0127 03:02:52.682381 1119269 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token dptef9.zgjhm0hnxmak7ndp \
	I0127 03:02:52.682532 1119269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba 
	I0127 03:02:52.682561 1119269 cni.go:84] Creating CNI manager for ""
	I0127 03:02:52.682574 1119269 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:52.684226 1119269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:52.685352 1119269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:52.697398 1119269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:52.719046 1119269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:52.719104 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:52.719145 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-717075 minikube.k8s.io/updated_at=2025_01_27T03_02_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=default-k8s-diff-port-717075 minikube.k8s.io/primary=true
	I0127 03:02:52.761799 1119269 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:52.952929 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:53.453841 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:53.953656 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:54.453137 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:54.953750 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:55.453823 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:55.953104 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:56.453840 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:56.953721 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:57.072043 1119269 kubeadm.go:1113] duration metric: took 4.352992678s to wait for elevateKubeSystemPrivileges
	I0127 03:02:57.072116 1119269 kubeadm.go:394] duration metric: took 4m37.021077009s to StartCluster
	I0127 03:02:57.072145 1119269 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:57.072271 1119269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:57.073904 1119269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:57.074254 1119269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.17 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:57.074373 1119269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:57.074508 1119269 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074520 1119269 config.go:182] Loaded profile config "default-k8s-diff-port-717075": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:57.074535 1119269 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-717075"
	W0127 03:02:57.074544 1119269 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:57.074540 1119269 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074579 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.074576 1119269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717075"
	I0127 03:02:57.074572 1119269 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074588 1119269 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074605 1119269 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-717075"
	I0127 03:02:57.074614 1119269 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-717075"
	W0127 03:02:57.074616 1119269 addons.go:247] addon dashboard should already be in state true
	W0127 03:02:57.074623 1119269 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:57.074653 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.074659 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.075056 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075068 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075068 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075121 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.075123 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075163 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.075267 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.075353 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.081008 1119269 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:57.082885 1119269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:57.094206 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I0127 03:02:57.094931 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.095746 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.095766 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.095843 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0127 03:02:57.095963 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
	I0127 03:02:57.096377 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.096485 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.096649 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.097010 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.097039 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.097172 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.097228 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.097627 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.097906 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.097919 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.098237 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.098286 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.098455 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I0127 03:02:57.098935 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.099556 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.099578 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.099797 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.100439 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.100480 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.100698 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.100896 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.105155 1119269 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-717075"
	W0127 03:02:57.105188 1119269 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:57.105221 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.105609 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.105668 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.121375 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0127 03:02:57.121658 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0127 03:02:57.121901 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.122123 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.122486 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0127 03:02:57.122504 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.122523 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.122758 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.122778 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.122813 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0127 03:02:57.122851 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.122923 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.123171 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.123241 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.123868 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.123978 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.123990 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.124007 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.124368 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.124387 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.124452 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.124681 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.124733 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.125300 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.125347 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.126534 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.127123 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.127415 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.128921 1119269 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:57.128930 1119269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:57.128931 1119269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:57.130374 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:57.130393 1119269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:57.130411 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:57.130431 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.130395 1119269 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:57.130396 1119269 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:57.130621 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.132516 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:57.132532 1119269 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:57.132547 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.135860 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.135912 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136120 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136644 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.136669 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136702 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.136736 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136747 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.136809 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.137008 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136938 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.137108 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.137179 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.137309 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.137376 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.137403 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.137589 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.137621 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.137794 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.138008 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.138010 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.152787 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0127 03:02:57.153399 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.153967 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.154002 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.154377 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.154584 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.156381 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.156603 1119269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:57.156624 1119269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:57.156649 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.159499 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.159944 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.160261 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.160520 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.160684 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.163248 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.164348 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.378051 1119269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:57.433542 1119269 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717075" to be "Ready" ...
	I0127 03:02:57.474874 1119269 node_ready.go:49] node "default-k8s-diff-port-717075" has status "Ready":"True"
	I0127 03:02:57.474911 1119269 node_ready.go:38] duration metric: took 41.327465ms for node "default-k8s-diff-port-717075" to be "Ready" ...
	I0127 03:02:57.474926 1119269 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:57.483255 1119269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:57.519688 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:57.542506 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:57.549073 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:57.549102 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:57.584535 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:57.584568 1119269 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:57.655673 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:57.655711 1119269 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:57.690996 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:57.691028 1119269 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:57.822313 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:57.822349 1119269 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:57.834363 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:57.834392 1119269 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:57.911077 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:58.019919 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:58.019953 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:58.212111 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:58.212145 1119269 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:58.309353 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:58.309381 1119269 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:58.378582 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:58.378611 1119269 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:58.444731 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:58.444762 1119269 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:58.506703 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:58.506745 1119269 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:58.584131 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:58.850852 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.331110115s)
	I0127 03:02:58.850948 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.850973 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.850970 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.308397522s)
	I0127 03:02:58.851017 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.851054 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.851306 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851328 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.851341 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.851348 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.851426 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851444 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.851465 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.851476 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.851634 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851650 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.851693 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:58.851740 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851762 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:58.851765 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.886972 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.887006 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.887346 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.887369 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.219464 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.308329693s)
	I0127 03:02:59.219531 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.219552 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:59.219946 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:59.220003 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.220024 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.220045 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.220059 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:59.220303 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:59.220340 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.220349 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.220364 1119269 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-717075"
	I0127 03:02:59.493877 1119269 pod_ready.go:93] pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:59.493919 1119269 pod_ready.go:82] duration metric: took 2.010631788s for pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:59.493932 1119269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:00.135755 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.551568283s)
	I0127 03:03:00.135819 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:03:00.135831 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:03:00.136153 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:03:00.136171 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:00.136179 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:03:00.136187 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:03:00.136181 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:03:00.136446 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:03:00.136459 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:00.137984 1119269 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717075 addons enable metrics-server
	
	I0127 03:03:00.139476 1119269 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 03:03:00.140933 1119269 addons.go:514] duration metric: took 3.06657827s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 03:03:01.501546 1119269 pod_ready.go:103] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"False"
	I0127 03:03:04.000116 1119269 pod_ready.go:103] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"False"
	I0127 03:03:05.002068 1119269 pod_ready.go:93] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:05.002134 1119269 pod_ready.go:82] duration metric: took 5.508188953s for pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:05.002149 1119269 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:05.007136 1119269 pod_ready.go:93] pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:05.007163 1119269 pod_ready.go:82] duration metric: took 5.003743ms for pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:05.007173 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.013821 1119269 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.013847 1119269 pod_ready.go:82] duration metric: took 1.006667196s for pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.013860 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.018661 1119269 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.018683 1119269 pod_ready.go:82] duration metric: took 4.814763ms for pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.018694 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlkhv" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.022482 1119269 pod_ready.go:93] pod "kube-proxy-nlkhv" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.022500 1119269 pod_ready.go:82] duration metric: took 3.79842ms for pod "kube-proxy-nlkhv" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.022512 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.197960 1119269 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.197986 1119269 pod_ready.go:82] duration metric: took 175.467759ms for pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.197995 1119269 pod_ready.go:39] duration metric: took 8.723057571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:03:06.198012 1119269 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:03:06.198073 1119269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:06.215210 1119269 api_server.go:72] duration metric: took 9.140900628s to wait for apiserver process to appear ...
	I0127 03:03:06.215249 1119269 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:03:06.215273 1119269 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8444/healthz ...
	I0127 03:03:06.219951 1119269 api_server.go:279] https://192.168.72.17:8444/healthz returned 200:
	ok
	I0127 03:03:06.220901 1119269 api_server.go:141] control plane version: v1.32.1
	I0127 03:03:06.220922 1119269 api_server.go:131] duration metric: took 5.666132ms to wait for apiserver health ...
	I0127 03:03:06.220929 1119269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:03:06.402128 1119269 system_pods.go:59] 9 kube-system pods found
	I0127 03:03:06.402165 1119269 system_pods.go:61] "coredns-668d6bf9bc-htglq" [2d4500a2-7bc9-4c25-af55-3c20257065c2] Running
	I0127 03:03:06.402172 1119269 system_pods.go:61] "coredns-668d6bf9bc-pwz9n" [cf6b7c7c-59eb-4901-88ba-a6e4556ddd4c] Running
	I0127 03:03:06.402177 1119269 system_pods.go:61] "etcd-default-k8s-diff-port-717075" [50fac615-6926-4023-8467-fa0c3fec39b2] Running
	I0127 03:03:06.402181 1119269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717075" [f86307a0-5994-4178-af8a-43613ed2bd63] Running
	I0127 03:03:06.402186 1119269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717075" [543f1b9a-da5a-4963-adc0-3bb2c88f2de0] Running
	I0127 03:03:06.402191 1119269 system_pods.go:61] "kube-proxy-nlkhv" [57c52d4f-937f-4fc8-98dd-9aa0531f8d17] Running
	I0127 03:03:06.402197 1119269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717075" [bb54f953-7c1f-4ce8-a590-7d029dcfea24] Running
	I0127 03:03:06.402205 1119269 system_pods.go:61] "metrics-server-f79f97bbb-fthnn" [fb8e4d08-fb1f-49a5-8984-44e975174502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:03:06.402211 1119269 system_pods.go:61] "storage-provisioner" [0a7c6b15-4ec5-46cf-8f6e-d98c292af196] Running
	I0127 03:03:06.402225 1119269 system_pods.go:74] duration metric: took 181.288367ms to wait for pod list to return data ...
	I0127 03:03:06.402236 1119269 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:03:06.598976 1119269 default_sa.go:45] found service account: "default"
	I0127 03:03:06.599007 1119269 default_sa.go:55] duration metric: took 196.76041ms for default service account to be created ...
	I0127 03:03:06.599017 1119269 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:03:06.802139 1119269 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-717075 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717075 -n default-k8s-diff-port-717075
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-717075 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-717075 logs -n 25: (1.48523429s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 02:56 UTC | 27 Jan 25 02:57 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-887091                  | no-preload-887091            | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-887091                                   | no-preload-887091            | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-264552                 | embed-certs-264552           | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-717075       | default-k8s-diff-port-717075 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-264552                                  | embed-certs-264552           | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-717075 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC |                     |
	|         | default-k8s-diff-port-717075                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-760492             | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 03:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-760492 image                           | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	| delete  | -p old-k8s-version-760492                              | old-k8s-version-760492       | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	| start   | -p newest-cni-642127 --memory=2200 --alsologtostderr   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-642127             | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-642127                  | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-642127 --memory=2200 --alsologtostderr   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-642127 image list                           | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	| delete  | -p newest-cni-642127                                   | newest-cni-642127            | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	| delete  | -p no-preload-887091                                   | no-preload-887091            | jenkins | v1.35.0 | 27 Jan 25 03:23 UTC | 27 Jan 25 03:23 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 03:02:00
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 03:02:00.237835 1121411 out.go:345] Setting OutFile to fd 1 ...
	I0127 03:02:00.238128 1121411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:02:00.238140 1121411 out.go:358] Setting ErrFile to fd 2...
	I0127 03:02:00.238146 1121411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:02:00.238345 1121411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 03:02:00.239045 1121411 out.go:352] Setting JSON to false
	I0127 03:02:00.240327 1121411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13467,"bootTime":1737933453,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:02:00.240474 1121411 start.go:139] virtualization: kvm guest
	I0127 03:02:00.242533 1121411 out.go:177] * [newest-cni-642127] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:02:00.244184 1121411 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 03:02:00.244247 1121411 notify.go:220] Checking for updates...
	I0127 03:02:00.246478 1121411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:02:00.247855 1121411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:00.249125 1121411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	I0127 03:02:00.250346 1121411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:02:00.251585 1121411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 03:02:00.253406 1121411 config.go:182] Loaded profile config "newest-cni-642127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:00.254032 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:00.254107 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:00.270414 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33505
	I0127 03:02:00.270862 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:00.271405 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:00.271428 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:00.271776 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:00.271945 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:00.272173 1121411 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 03:02:00.272461 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:00.272496 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:00.287317 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36083
	I0127 03:02:00.287836 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:00.288298 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:00.288340 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:00.288708 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:00.288885 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:00.325767 1121411 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 03:02:00.327047 1121411 start.go:297] selected driver: kvm2
	I0127 03:02:00.327060 1121411 start.go:901] validating driver "kvm2" against &{Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:02:00.327183 1121411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 03:02:00.327982 1121411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:02:00.328064 1121411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-1057178/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 03:02:00.343178 1121411 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 03:02:00.343639 1121411 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 03:02:00.343677 1121411 cni.go:84] Creating CNI manager for ""
	I0127 03:02:00.343730 1121411 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:00.343763 1121411 start.go:340] cluster config:
	{Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:02:00.343883 1121411 iso.go:125] acquiring lock: {Name:mkd30bc9d11f9170e89ad95ce7ba25fa6d1e04f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:02:00.345590 1121411 out.go:177] * Starting "newest-cni-642127" primary control-plane node in "newest-cni-642127" cluster
	I0127 03:02:00.346774 1121411 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 03:02:00.346814 1121411 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 03:02:00.346828 1121411 cache.go:56] Caching tarball of preloaded images
	I0127 03:02:00.346908 1121411 preload.go:172] Found /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 03:02:00.346919 1121411 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 03:02:00.347008 1121411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/config.json ...
	I0127 03:02:00.347215 1121411 start.go:360] acquireMachinesLock for newest-cni-642127: {Name:mka8dc154c517d64837d06e2f84f8bddd0b82c58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 03:02:00.347258 1121411 start.go:364] duration metric: took 23.213µs to acquireMachinesLock for "newest-cni-642127"
	I0127 03:02:00.347273 1121411 start.go:96] Skipping create...Using existing machine configuration
	I0127 03:02:00.347278 1121411 fix.go:54] fixHost starting: 
	I0127 03:02:00.347525 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:00.347569 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:00.362339 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I0127 03:02:00.362837 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:00.363413 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:00.363435 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:00.363738 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:00.363908 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:00.364065 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:00.365643 1121411 fix.go:112] recreateIfNeeded on newest-cni-642127: state=Stopped err=<nil>
	I0127 03:02:00.365669 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	W0127 03:02:00.366076 1121411 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 03:02:00.368560 1121411 out.go:177] * Restarting existing kvm2 VM for "newest-cni-642127" ...
	I0127 03:01:59.553947 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:01.029438 1119007 pod_ready.go:82] duration metric: took 4m0.000430308s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:01.029463 1119007 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 03:02:01.029492 1119007 pod_ready.go:39] duration metric: took 4m12.545085543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:01.029521 1119007 kubeadm.go:597] duration metric: took 4m20.2724454s to restartPrimaryControlPlane
	W0127 03:02:01.029578 1119007 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:01.029603 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:02:03.004910 1119007 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.9752757s)
	I0127 03:02:03.005026 1119007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:03.022327 1119007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:03.033433 1119007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:03.043716 1119007 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:03.043751 1119007 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:03.043807 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:03.053848 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:03.053913 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:03.064618 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:03.075259 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:03.075327 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:03.087088 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:03.098909 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:03.098975 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:03.110053 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:03.119864 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:03.119938 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:03.130987 1119007 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:03.185348 1119007 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:03.185417 1119007 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:03.314698 1119007 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:03.314881 1119007 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:03.315043 1119007 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:03.324401 1119007 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:03.326164 1119007 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:03.326268 1119007 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:03.326359 1119007 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:03.326477 1119007 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:03.326572 1119007 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:03.326663 1119007 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:03.326738 1119007 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:03.326859 1119007 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:03.327073 1119007 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:03.327208 1119007 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:03.327338 1119007 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:03.327408 1119007 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:03.327502 1119007 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:03.521123 1119007 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:03.756848 1119007 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:03.911089 1119007 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:04.122010 1119007 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:04.383085 1119007 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:04.383614 1119007 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:04.386205 1119007 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:00.791431 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:02.793532 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:00.101750 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:02.600452 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:00.369945 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Start
	I0127 03:02:00.370121 1121411 main.go:141] libmachine: (newest-cni-642127) starting domain...
	I0127 03:02:00.370143 1121411 main.go:141] libmachine: (newest-cni-642127) ensuring networks are active...
	I0127 03:02:00.370872 1121411 main.go:141] libmachine: (newest-cni-642127) Ensuring network default is active
	I0127 03:02:00.371180 1121411 main.go:141] libmachine: (newest-cni-642127) Ensuring network mk-newest-cni-642127 is active
	I0127 03:02:00.371540 1121411 main.go:141] libmachine: (newest-cni-642127) getting domain XML...
	I0127 03:02:00.372193 1121411 main.go:141] libmachine: (newest-cni-642127) creating domain...
	I0127 03:02:01.655632 1121411 main.go:141] libmachine: (newest-cni-642127) waiting for IP...
	I0127 03:02:01.656638 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:01.657157 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:01.657251 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:01.657139 1121446 retry.go:31] will retry after 277.784658ms: waiting for domain to come up
	I0127 03:02:01.936660 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:01.937240 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:01.937271 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:01.937207 1121446 retry.go:31] will retry after 238.163617ms: waiting for domain to come up
	I0127 03:02:02.176792 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:02.177474 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:02.177544 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:02.177436 1121446 retry.go:31] will retry after 380.939356ms: waiting for domain to come up
	I0127 03:02:02.560097 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:02.560666 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:02.560700 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:02.560618 1121446 retry.go:31] will retry after 505.552982ms: waiting for domain to come up
	I0127 03:02:03.067443 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:03.067968 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:03.068040 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:03.067965 1121446 retry.go:31] will retry after 727.427105ms: waiting for domain to come up
	I0127 03:02:03.797031 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:03.797596 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:03.797621 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:03.797562 1121446 retry.go:31] will retry after 647.611718ms: waiting for domain to come up
	I0127 03:02:04.447043 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:04.447523 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:04.447556 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:04.447508 1121446 retry.go:31] will retry after 984.747883ms: waiting for domain to come up
	I0127 03:02:04.388044 1119007 out.go:235]   - Booting up control plane ...
	I0127 03:02:04.388157 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:04.388265 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:04.388373 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:04.409379 1119007 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:04.416389 1119007 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:04.416479 1119007 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:04.571487 1119007 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:04.571690 1119007 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:05.072916 1119007 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.574288ms
	I0127 03:02:05.073090 1119007 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:05.292102 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:07.292399 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:09.792796 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:05.099225 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:07.099594 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:09.600572 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:05.434383 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:05.434961 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:05.434994 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:05.434926 1121446 retry.go:31] will retry after 1.239188819s: waiting for domain to come up
	I0127 03:02:06.675638 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:06.676209 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:06.676244 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:06.676172 1121446 retry.go:31] will retry after 1.489275436s: waiting for domain to come up
	I0127 03:02:08.167884 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:08.168365 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:08.168402 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:08.168327 1121446 retry.go:31] will retry after 1.739982698s: waiting for domain to come up
	I0127 03:02:09.910362 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:09.910871 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:09.910964 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:09.910871 1121446 retry.go:31] will retry after 2.79669233s: waiting for domain to come up
	I0127 03:02:10.574512 1119007 kubeadm.go:310] [api-check] The API server is healthy after 5.501444049s
	I0127 03:02:10.590265 1119007 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:10.612200 1119007 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:10.650305 1119007 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:10.650585 1119007 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-887091 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:10.661688 1119007 kubeadm.go:310] [bootstrap-token] Using token: 25alvo.7xrmg7nh4q5v903n
	I0127 03:02:10.663119 1119007 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:10.663280 1119007 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:10.671888 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:10.685310 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:10.690214 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:10.694363 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:10.698959 1119007 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:10.982964 1119007 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:11.430752 1119007 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:11.982446 1119007 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:11.984681 1119007 kubeadm.go:310] 
	I0127 03:02:11.984836 1119007 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:11.984859 1119007 kubeadm.go:310] 
	I0127 03:02:11.984989 1119007 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:11.985010 1119007 kubeadm.go:310] 
	I0127 03:02:11.985048 1119007 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:11.985139 1119007 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:11.985214 1119007 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:11.985223 1119007 kubeadm.go:310] 
	I0127 03:02:11.985308 1119007 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:11.985320 1119007 kubeadm.go:310] 
	I0127 03:02:11.985386 1119007 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:11.985394 1119007 kubeadm.go:310] 
	I0127 03:02:11.985466 1119007 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:11.985573 1119007 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:11.985666 1119007 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:11.985676 1119007 kubeadm.go:310] 
	I0127 03:02:11.985787 1119007 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:11.985893 1119007 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:11.985903 1119007 kubeadm.go:310] 
	I0127 03:02:11.986015 1119007 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 25alvo.7xrmg7nh4q5v903n \
	I0127 03:02:11.986154 1119007 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
	I0127 03:02:11.986187 1119007 kubeadm.go:310] 	--control-plane 
	I0127 03:02:11.986194 1119007 kubeadm.go:310] 
	I0127 03:02:11.986302 1119007 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:11.986313 1119007 kubeadm.go:310] 
	I0127 03:02:11.986421 1119007 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 25alvo.7xrmg7nh4q5v903n \
	I0127 03:02:11.986559 1119007 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba 
	I0127 03:02:11.988046 1119007 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:11.988085 1119007 cni.go:84] Creating CNI manager for ""
	I0127 03:02:11.988096 1119007 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:11.989984 1119007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:11.991565 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:12.008152 1119007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:12.031285 1119007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:12.031368 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:12.031415 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-887091 minikube.k8s.io/updated_at=2025_01_27T03_02_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=no-preload-887091 minikube.k8s.io/primary=true
	I0127 03:02:12.301916 1119007 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:12.302079 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:12.802985 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:11.795142 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:14.292215 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:11.613207 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:14.098783 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:12.710060 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:12.710698 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:12.710737 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:12.710630 1121446 retry.go:31] will retry after 2.899766509s: waiting for domain to come up
	I0127 03:02:13.302566 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:13.802370 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:14.302582 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:14.802350 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:15.302355 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:15.802132 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:15.926758 1119007 kubeadm.go:1113] duration metric: took 3.895467932s to wait for elevateKubeSystemPrivileges
	I0127 03:02:15.926808 1119007 kubeadm.go:394] duration metric: took 4m35.245756492s to StartCluster
	I0127 03:02:15.926834 1119007 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:15.926944 1119007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:15.928428 1119007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:15.928677 1119007 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:15.928795 1119007 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:15.928913 1119007 config.go:182] Loaded profile config "no-preload-887091": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:15.928932 1119007 addons.go:69] Setting metrics-server=true in profile "no-preload-887091"
	I0127 03:02:15.928966 1119007 addons.go:238] Setting addon metrics-server=true in "no-preload-887091"
	I0127 03:02:15.928977 1119007 addons.go:69] Setting dashboard=true in profile "no-preload-887091"
	W0127 03:02:15.928985 1119007 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:15.928991 1119007 addons.go:238] Setting addon dashboard=true in "no-preload-887091"
	I0127 03:02:15.928918 1119007 addons.go:69] Setting storage-provisioner=true in profile "no-preload-887091"
	I0127 03:02:15.929020 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.929025 1119007 addons.go:238] Setting addon storage-provisioner=true in "no-preload-887091"
	W0127 03:02:15.929036 1119007 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:15.928961 1119007 addons.go:69] Setting default-storageclass=true in profile "no-preload-887091"
	I0127 03:02:15.929073 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.929093 1119007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-887091"
	W0127 03:02:15.928999 1119007 addons.go:247] addon dashboard should already be in state true
	I0127 03:02:15.929175 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.929496 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929496 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929544 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929557 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.929547 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.929584 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.929499 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.929692 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.930306 1119007 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:15.931877 1119007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:15.952533 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I0127 03:02:15.952549 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0127 03:02:15.952581 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46601
	I0127 03:02:15.952721 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
	I0127 03:02:15.954529 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.954547 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.954808 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.955205 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955229 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.955233 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955253 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.955313 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.955413 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955437 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.955766 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.955849 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.955886 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.955947 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.956424 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.956463 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.956469 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.956507 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.956724 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.956927 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.957100 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.957708 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.957746 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.960884 1119007 addons.go:238] Setting addon default-storageclass=true in "no-preload-887091"
	W0127 03:02:15.960910 1119007 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:15.960960 1119007 host.go:66] Checking if "no-preload-887091" exists ...
	I0127 03:02:15.961323 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.961366 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.977560 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
	I0127 03:02:15.978028 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.978173 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
	I0127 03:02:15.978517 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0127 03:02:15.978693 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.978872 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.978901 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.979226 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.979298 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.979562 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.979576 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.979593 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.979923 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.980113 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.980289 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.980304 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.980894 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.981251 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:15.981811 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:15.982385 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:15.983016 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:15.983162 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I0127 03:02:15.983756 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:15.983837 1119007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:15.984185 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:15.984202 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:15.984606 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:15.985117 1119007 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:15.985204 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:15.985237 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:15.985253 1119007 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:15.985273 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:15.985297 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:15.985367 1119007 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:15.986458 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:15.986480 1119007 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:15.986546 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:15.987599 1119007 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:15.988812 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.988933 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:15.989273 1119007 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:15.989471 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:15.989502 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.989571 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:15.989716 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:15.989884 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:15.990033 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:15.990172 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:15.990858 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.991445 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:15.991468 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.991628 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:15.991828 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:15.992248 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:15.992428 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:15.993703 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.994218 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:15.994244 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:15.994557 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:15.994742 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:15.994902 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:15.995042 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:16.004890 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I0127 03:02:16.005324 1119007 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:16.005841 1119007 main.go:141] libmachine: Using API Version  1
	I0127 03:02:16.005861 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:16.006249 1119007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:16.006454 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
	I0127 03:02:16.008475 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
	I0127 03:02:16.008706 1119007 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:16.008719 1119007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:16.008733 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
	I0127 03:02:16.011722 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:16.012561 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
	I0127 03:02:16.012637 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
	I0127 03:02:16.012663 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
	I0127 03:02:16.012777 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
	I0127 03:02:16.012973 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
	I0127 03:02:16.013155 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
	I0127 03:02:16.171165 1119007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:16.193562 1119007 node_ready.go:35] waiting up to 6m0s for node "no-preload-887091" to be "Ready" ...
	I0127 03:02:16.246946 1119007 node_ready.go:49] node "no-preload-887091" has status "Ready":"True"
	I0127 03:02:16.246978 1119007 node_ready.go:38] duration metric: took 53.383421ms for node "no-preload-887091" to be "Ready" ...
	I0127 03:02:16.246992 1119007 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:16.274293 1119007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:16.274621 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:16.274647 1119007 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:16.305232 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:16.327479 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:16.328118 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:16.328136 1119007 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:16.428329 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:16.428364 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:16.466201 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:16.466236 1119007 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:16.599271 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:16.599315 1119007 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:16.638608 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:16.638637 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:16.828108 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:16.828150 1119007 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:16.838645 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:16.838676 1119007 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:16.984773 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:16.984808 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:16.985269 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:16.985286 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:16.985295 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:16.985302 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:16.985629 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:16.985649 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.004424 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:17.004447 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:17.004789 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:17.004799 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:17.004830 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.011294 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:17.011605 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:17.011624 1119007 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:17.109457 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:17.109494 1119007 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:17.218037 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:17.218071 1119007 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:17.272264 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:17.272299 1119007 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:17.346698 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:17.903867 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.57633993s)
	I0127 03:02:17.903940 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:17.903958 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:17.904299 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:17.904382 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:17.904399 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.904412 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:17.904418 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:17.904680 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:17.904702 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:17.904715 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:18.291876 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.280535535s)
	I0127 03:02:18.291939 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:18.291962 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:18.292296 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:18.292315 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:18.292323 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:18.292329 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:18.293045 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
	I0127 03:02:18.293120 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:18.293147 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:18.293165 1119007 addons.go:479] Verifying addon metrics-server=true in "no-preload-887091"
	I0127 03:02:18.308148 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:19.202588 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.855830221s)
	I0127 03:02:19.202668 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:19.202685 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:19.202996 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:19.203014 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:19.203031 1119007 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:19.203046 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
	I0127 03:02:19.203365 1119007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:19.203408 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:19.205207 1119007 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-887091 addons enable metrics-server
	
	I0127 03:02:19.206884 1119007 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 03:02:16.293451 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:18.793149 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:19.785753 1119263 pod_ready.go:82] duration metric: took 4m0.001003583s for pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:19.785781 1119263 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 03:02:19.785801 1119263 pod_ready.go:39] duration metric: took 4m12.565302655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:19.785832 1119263 kubeadm.go:597] duration metric: took 4m20.078127881s to restartPrimaryControlPlane
	W0127 03:02:19.785891 1119263 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:19.785918 1119263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:02:16.101837 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:18.600416 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:15.612007 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:15.612503 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
	I0127 03:02:15.612532 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:15.612477 1121446 retry.go:31] will retry after 4.281984487s: waiting for domain to come up
	I0127 03:02:19.898517 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:19.899156 1121411 main.go:141] libmachine: (newest-cni-642127) found domain IP: 192.168.50.51
	I0127 03:02:19.899184 1121411 main.go:141] libmachine: (newest-cni-642127) reserving static IP address...
	I0127 03:02:19.899199 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has current primary IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:19.899706 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "newest-cni-642127", mac: "52:54:00:b2:c0:f5", ip: "192.168.50.51"} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:19.899748 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | skip adding static IP to network mk-newest-cni-642127 - found existing host DHCP lease matching {name: "newest-cni-642127", mac: "52:54:00:b2:c0:f5", ip: "192.168.50.51"}
	I0127 03:02:19.899765 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Getting to WaitForSSH function...
	I0127 03:02:19.899786 1121411 main.go:141] libmachine: (newest-cni-642127) reserved static IP address 192.168.50.51 for domain newest-cni-642127
	I0127 03:02:19.899794 1121411 main.go:141] libmachine: (newest-cni-642127) waiting for SSH...
	I0127 03:02:19.902680 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:19.903077 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:19.903108 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:19.903425 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Using SSH client type: external
	I0127 03:02:19.903455 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa (-rw-------)
	I0127 03:02:19.903497 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 03:02:19.903528 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | About to run SSH command:
	I0127 03:02:19.903545 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | exit 0
	I0127 03:02:20.033236 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | SSH cmd err, output: <nil>: 
	I0127 03:02:20.033650 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetConfigRaw
	I0127 03:02:20.034423 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
	I0127 03:02:20.037477 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.038000 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.038034 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.038292 1121411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/config.json ...
	I0127 03:02:20.038569 1121411 machine.go:93] provisionDockerMachine start ...
	I0127 03:02:20.038593 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:20.038817 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.041604 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.042029 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.042058 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.042374 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.042730 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.042972 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.043158 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.043362 1121411 main.go:141] libmachine: Using SSH client type: native
	I0127 03:02:20.043631 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0127 03:02:20.043646 1121411 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 03:02:20.162052 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 03:02:20.162088 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetMachineName
	I0127 03:02:20.162389 1121411 buildroot.go:166] provisioning hostname "newest-cni-642127"
	I0127 03:02:20.162416 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetMachineName
	I0127 03:02:20.162603 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.166195 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.166703 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.166735 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.167015 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.167255 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.167440 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.167629 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.167847 1121411 main.go:141] libmachine: Using SSH client type: native
	I0127 03:02:20.168082 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0127 03:02:20.168098 1121411 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-642127 && echo "newest-cni-642127" | sudo tee /etc/hostname
	I0127 03:02:19.208319 1119007 addons.go:514] duration metric: took 3.279531879s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 03:02:20.784826 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:20.304578 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-642127
	
	I0127 03:02:20.304614 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.307961 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.308494 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.308576 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.308725 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.308929 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.309194 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.309354 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.309604 1121411 main.go:141] libmachine: Using SSH client type: native
	I0127 03:02:20.309846 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0127 03:02:20.309865 1121411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-642127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-642127/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-642127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 03:02:20.431545 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:02:20.431586 1121411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-1057178/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-1057178/.minikube}
	I0127 03:02:20.431617 1121411 buildroot.go:174] setting up certificates
	I0127 03:02:20.431633 1121411 provision.go:84] configureAuth start
	I0127 03:02:20.431649 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetMachineName
	I0127 03:02:20.431999 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
	I0127 03:02:20.435425 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.435885 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.435918 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.436172 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.439389 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.439969 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.440002 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.440288 1121411 provision.go:143] copyHostCerts
	I0127 03:02:20.440368 1121411 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem, removing ...
	I0127 03:02:20.440392 1121411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem
	I0127 03:02:20.440475 1121411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem (1078 bytes)
	I0127 03:02:20.440610 1121411 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem, removing ...
	I0127 03:02:20.440672 1121411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem
	I0127 03:02:20.440724 1121411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem (1123 bytes)
	I0127 03:02:20.440826 1121411 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem, removing ...
	I0127 03:02:20.440838 1121411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem
	I0127 03:02:20.440872 1121411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem (1675 bytes)
	I0127 03:02:20.441000 1121411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem org=jenkins.newest-cni-642127 san=[127.0.0.1 192.168.50.51 localhost minikube newest-cni-642127]
	I0127 03:02:20.582957 1121411 provision.go:177] copyRemoteCerts
	I0127 03:02:20.583042 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 03:02:20.583082 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.586468 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.586937 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.586967 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.587297 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.587493 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.587653 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.587816 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:20.678286 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 03:02:20.710984 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 03:02:20.743521 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 03:02:20.776342 1121411 provision.go:87] duration metric: took 344.690364ms to configureAuth
	I0127 03:02:20.776390 1121411 buildroot.go:189] setting minikube options for container-runtime
	I0127 03:02:20.776645 1121411 config.go:182] Loaded profile config "newest-cni-642127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:20.776665 1121411 machine.go:96] duration metric: took 738.080097ms to provisionDockerMachine
	I0127 03:02:20.776676 1121411 start.go:293] postStartSetup for "newest-cni-642127" (driver="kvm2")
	I0127 03:02:20.776689 1121411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 03:02:20.776728 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:20.777166 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 03:02:20.777201 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.781262 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.781754 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.781782 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.782169 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.782409 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.782633 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.782886 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:20.877090 1121411 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 03:02:20.882893 1121411 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 03:02:20.882941 1121411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/addons for local assets ...
	I0127 03:02:20.883012 1121411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/files for local assets ...
	I0127 03:02:20.883121 1121411 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem -> 10644392.pem in /etc/ssl/certs
	I0127 03:02:20.883262 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 03:02:20.897501 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /etc/ssl/certs/10644392.pem (1708 bytes)
	I0127 03:02:20.927044 1121411 start.go:296] duration metric: took 150.330171ms for postStartSetup
	I0127 03:02:20.927103 1121411 fix.go:56] duration metric: took 20.579822967s for fixHost
	I0127 03:02:20.927133 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:20.930644 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.931093 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:20.931129 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:20.931414 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:20.931717 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.931919 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:20.932105 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:20.932280 1121411 main.go:141] libmachine: Using SSH client type: native
	I0127 03:02:20.932530 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0127 03:02:20.932545 1121411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 03:02:21.046461 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946941.010071066
	
	I0127 03:02:21.046493 1121411 fix.go:216] guest clock: 1737946941.010071066
	I0127 03:02:21.046504 1121411 fix.go:229] Guest: 2025-01-27 03:02:21.010071066 +0000 UTC Remote: 2025-01-27 03:02:20.927108919 +0000 UTC m=+20.729857739 (delta=82.962147ms)
	I0127 03:02:21.046536 1121411 fix.go:200] guest clock delta is within tolerance: 82.962147ms
	I0127 03:02:21.046543 1121411 start.go:83] releasing machines lock for "newest-cni-642127", held for 20.699275534s
	I0127 03:02:21.046580 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:21.046929 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
	I0127 03:02:21.050101 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.050549 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:21.050572 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.050930 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:21.051682 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:21.051910 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:21.052040 1121411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 03:02:21.052128 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:21.052184 1121411 ssh_runner.go:195] Run: cat /version.json
	I0127 03:02:21.052219 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:21.055762 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.055836 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.056356 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:21.056389 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.056429 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:21.056447 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:21.056720 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:21.056899 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:21.056974 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:21.057099 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:21.057147 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:21.057303 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:21.057708 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:21.057902 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:21.169709 1121411 ssh_runner.go:195] Run: systemctl --version
	I0127 03:02:21.177622 1121411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 03:02:21.184029 1121411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 03:02:21.184112 1121411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 03:02:21.202861 1121411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 03:02:21.202890 1121411 start.go:495] detecting cgroup driver to use...
	I0127 03:02:21.202967 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 03:02:21.236110 1121411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 03:02:21.250683 1121411 docker.go:217] disabling cri-docker service (if available) ...
	I0127 03:02:21.250796 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 03:02:21.266354 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 03:02:21.284146 1121411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 03:02:21.436406 1121411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 03:02:21.620560 1121411 docker.go:233] disabling docker service ...
	I0127 03:02:21.620655 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 03:02:21.639534 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 03:02:21.657179 1121411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 03:02:21.828676 1121411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 03:02:21.993891 1121411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 03:02:22.011124 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 03:02:22.037734 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 03:02:22.049863 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 03:02:22.064327 1121411 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 03:02:22.064427 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 03:02:22.080328 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 03:02:22.093806 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 03:02:22.106165 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 03:02:22.117782 1121411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 03:02:22.129650 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 03:02:22.152872 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 03:02:22.165020 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 03:02:22.177918 1121411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 03:02:22.188259 1121411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 03:02:22.188355 1121411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 03:02:22.204350 1121411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 03:02:22.218093 1121411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:22.356619 1121411 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 03:02:22.385087 1121411 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 03:02:22.385172 1121411 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 03:02:22.389980 1121411 retry.go:31] will retry after 758.524819ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 03:02:23.148722 1121411 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 03:02:23.154533 1121411 start.go:563] Will wait 60s for crictl version
	I0127 03:02:23.154611 1121411 ssh_runner.go:195] Run: which crictl
	I0127 03:02:23.159040 1121411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 03:02:23.200478 1121411 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 03:02:23.200579 1121411 ssh_runner.go:195] Run: containerd --version
	I0127 03:02:23.228424 1121411 ssh_runner.go:195] Run: containerd --version
	I0127 03:02:23.265392 1121411 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 03:02:23.266856 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
	I0127 03:02:23.269741 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:23.270196 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:23.270231 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:23.270441 1121411 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 03:02:23.275461 1121411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:02:23.294081 1121411 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 03:02:21.866190 1119263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.080241643s)
	I0127 03:02:21.866293 1119263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:21.886667 1119263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:21.901554 1119263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:21.915270 1119263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:21.915296 1119263 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:21.915369 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:21.929169 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:21.929294 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:21.942913 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:21.956444 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:21.956522 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:21.970342 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:21.989145 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:21.989232 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:22.001913 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:22.013198 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:22.013270 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:22.026131 1119263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:22.226370 1119263 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:20.601947 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:22.605621 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:23.295574 1121411 kubeadm.go:883] updating cluster {Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 03:02:23.295756 1121411 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 03:02:23.295841 1121411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:02:23.331579 1121411 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 03:02:23.331604 1121411 containerd.go:534] Images already preloaded, skipping extraction
	I0127 03:02:23.331661 1121411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:02:23.368818 1121411 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 03:02:23.368848 1121411 cache_images.go:84] Images are preloaded, skipping loading
	I0127 03:02:23.368856 1121411 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.32.1 containerd true true} ...
	I0127 03:02:23.369012 1121411 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-642127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 03:02:23.369101 1121411 ssh_runner.go:195] Run: sudo crictl info
	I0127 03:02:23.405913 1121411 cni.go:84] Creating CNI manager for ""
	I0127 03:02:23.405949 1121411 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:23.405966 1121411 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 03:02:23.406015 1121411 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-642127 NodeName:newest-cni-642127 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 03:02:23.406210 1121411 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-642127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.51"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 03:02:23.406291 1121411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 03:02:23.418253 1121411 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:02:23.418339 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:02:23.431397 1121411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0127 03:02:23.452908 1121411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:02:23.474059 1121411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0127 03:02:23.494976 1121411 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0127 03:02:23.499246 1121411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
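	The bash one-liner above keeps /etc/hosts idempotent: it drops any existing control-plane.minikube.internal line and appends the current IP. A rough Go equivalent of the same filter-and-append step (a hypothetical helper, for illustration only):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites a hosts-file body so that exactly one line maps
    // hostname to ip, dropping any stale mapping for the same hostname first.
    func ensureHostsEntry(hosts, ip, hostname string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            // Drop the old entry, mirroring `grep -v $'\t<hostname>$'` above.
            if strings.HasSuffix(line, "\t"+hostname) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Print the updated file; writing it back needs root, like the sudo cp above.
        fmt.Print(ensureHostsEntry(string(data), "192.168.50.51", "control-plane.minikube.internal"))
    }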
	I0127 03:02:23.512541 1121411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:23.648564 1121411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:23.667204 1121411 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127 for IP: 192.168.50.51
	I0127 03:02:23.667230 1121411 certs.go:194] generating shared ca certs ...
	I0127 03:02:23.667265 1121411 certs.go:226] acquiring lock for ca certs: {Name:mk567acc23cbe907605c03a2ec03c8e4859e8343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:23.667447 1121411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key
	I0127 03:02:23.667526 1121411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key
	I0127 03:02:23.667540 1121411 certs.go:256] generating profile certs ...
	I0127 03:02:23.667681 1121411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/client.key
	I0127 03:02:23.667777 1121411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/apiserver.key.fe27a200
	I0127 03:02:23.667863 1121411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/proxy-client.key
	I0127 03:02:23.668017 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem (1338 bytes)
	W0127 03:02:23.668071 1121411 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439_empty.pem, impossibly tiny 0 bytes
	I0127 03:02:23.668085 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 03:02:23.668115 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:02:23.668143 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:02:23.668177 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem (1675 bytes)
	I0127 03:02:23.668261 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem (1708 bytes)
	I0127 03:02:23.669195 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:02:23.715219 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 03:02:23.757555 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:02:23.797303 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 03:02:23.839764 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 03:02:23.889721 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 03:02:23.923393 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:02:23.953947 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 03:02:23.983760 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:02:24.016899 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem --> /usr/share/ca-certificates/1064439.pem (1338 bytes)
	I0127 03:02:24.060186 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /usr/share/ca-certificates/10644392.pem (1708 bytes)
	I0127 03:02:24.099215 1121411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:02:24.120841 1121411 ssh_runner.go:195] Run: openssl version
	I0127 03:02:24.127163 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:02:24.139725 1121411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:02:24.144911 1121411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:43 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:02:24.145000 1121411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:02:24.153545 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:02:24.167817 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1064439.pem && ln -fs /usr/share/ca-certificates/1064439.pem /etc/ssl/certs/1064439.pem"
	I0127 03:02:24.182019 1121411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1064439.pem
	I0127 03:02:24.188811 1121411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:51 /usr/share/ca-certificates/1064439.pem
	I0127 03:02:24.188883 1121411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1064439.pem
	I0127 03:02:24.196999 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1064439.pem /etc/ssl/certs/51391683.0"
	I0127 03:02:24.209518 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10644392.pem && ln -fs /usr/share/ca-certificates/10644392.pem /etc/ssl/certs/10644392.pem"
	I0127 03:02:24.221497 1121411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10644392.pem
	I0127 03:02:24.226538 1121411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:51 /usr/share/ca-certificates/10644392.pem
	I0127 03:02:24.226618 1121411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10644392.pem
	I0127 03:02:24.233572 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10644392.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 03:02:24.245296 1121411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 03:02:24.250242 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 03:02:24.256818 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 03:02:24.264939 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 03:02:24.272818 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 03:02:24.280734 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 03:02:24.289169 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
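	Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. A hypothetical Go equivalent of that check using crypto/x509 (the path is taken from the log, the helper name is invented):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path becomes
    // invalid within d, the same question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }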
	I0127 03:02:24.296827 1121411 kubeadm.go:392] StartCluster: {Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:02:24.297003 1121411 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 03:02:24.297095 1121411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:02:24.345692 1121411 cri.go:89] found id: "a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c"
	I0127 03:02:24.345721 1121411 cri.go:89] found id: "2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba"
	I0127 03:02:24.345726 1121411 cri.go:89] found id: "1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f"
	I0127 03:02:24.345731 1121411 cri.go:89] found id: "7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc"
	I0127 03:02:24.345736 1121411 cri.go:89] found id: "f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71"
	I0127 03:02:24.345741 1121411 cri.go:89] found id: "6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19"
	I0127 03:02:24.345745 1121411 cri.go:89] found id: "22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a"
	I0127 03:02:24.345749 1121411 cri.go:89] found id: "6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3"
	I0127 03:02:24.345753 1121411 cri.go:89] found id: ""
	I0127 03:02:24.345806 1121411 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 03:02:24.363134 1121411 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T03:02:24Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 03:02:24.363233 1121411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:02:24.377414 1121411 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 03:02:24.377441 1121411 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 03:02:24.377512 1121411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 03:02:24.391116 1121411 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 03:02:24.392658 1121411 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-642127" does not appear in /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:24.393662 1121411 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-1057178/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-642127" cluster setting kubeconfig missing "newest-cni-642127" context setting]
	I0127 03:02:24.395074 1121411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:24.406122 1121411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 03:02:24.417412 1121411 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0127 03:02:24.417457 1121411 kubeadm.go:1160] stopping kube-system containers ...
	I0127 03:02:24.417475 1121411 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 03:02:24.417545 1121411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:02:24.459011 1121411 cri.go:89] found id: "a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c"
	I0127 03:02:24.459043 1121411 cri.go:89] found id: "2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba"
	I0127 03:02:24.459049 1121411 cri.go:89] found id: "1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f"
	I0127 03:02:24.459055 1121411 cri.go:89] found id: "7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc"
	I0127 03:02:24.459059 1121411 cri.go:89] found id: "f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71"
	I0127 03:02:24.459065 1121411 cri.go:89] found id: "6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19"
	I0127 03:02:24.459069 1121411 cri.go:89] found id: "22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a"
	I0127 03:02:24.459074 1121411 cri.go:89] found id: "6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3"
	I0127 03:02:24.459079 1121411 cri.go:89] found id: ""
	I0127 03:02:24.459085 1121411 cri.go:252] Stopping containers: [a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c 2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba 1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f 7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71 6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19 22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a 6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3]
	I0127 03:02:24.459142 1121411 ssh_runner.go:195] Run: which crictl
	I0127 03:02:24.463700 1121411 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c 2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba 1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f 7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71 6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19 22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a 6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3
	I0127 03:02:24.514136 1121411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 03:02:24.533173 1121411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:24.546127 1121411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:24.546153 1121411 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:24.546208 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:24.557350 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:24.557425 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:24.568241 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:24.579187 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:24.579283 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:24.590554 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:24.603551 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:24.603617 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:24.617395 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:24.630452 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:24.630532 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:24.642268 1121411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:24.652281 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:24.829811 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:23.282142 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:25.286311 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.286348 1119007 pod_ready.go:82] duration metric: took 9.012019717s for pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.286363 1119007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.296155 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.296266 1119007 pod_ready.go:82] duration metric: took 9.891475ms for pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.296304 1119007 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.306424 1119007 pod_ready.go:93] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.306520 1119007 pod_ready.go:82] duration metric: took 10.178061ms for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.306550 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.316320 1119007 pod_ready.go:93] pod "kube-apiserver-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.316353 1119007 pod_ready.go:82] duration metric: took 9.779811ms for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.316368 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.324972 1119007 pod_ready.go:93] pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.324998 1119007 pod_ready.go:82] duration metric: took 8.620263ms for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.325011 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-45pz6" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.682761 1119007 pod_ready.go:93] pod "kube-proxy-45pz6" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:25.682792 1119007 pod_ready.go:82] duration metric: took 357.773408ms for pod "kube-proxy-45pz6" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:25.682807 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:26.086323 1119007 pod_ready.go:93] pod "kube-scheduler-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:26.086365 1119007 pod_ready.go:82] duration metric: took 403.548355ms for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:26.086378 1119007 pod_ready.go:39] duration metric: took 9.839373235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
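	The pod_ready waits above poll each system-critical pod until its Ready condition turns True, with a 6m0s ceiling per pod. A rough client-go sketch of the same check (a hypothetical helper, not minikube's pod_ready implementation; the pod name and kubeconfig path are copied from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20316-1057178/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll one pod by name until Ready or until the deadline, like the waits above.
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-887091", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }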
	I0127 03:02:26.086398 1119007 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:26.086493 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:26.115441 1119007 api_server.go:72] duration metric: took 10.186729821s to wait for apiserver process to appear ...
	I0127 03:02:26.115474 1119007 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:26.115503 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0127 03:02:26.125822 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0127 03:02:26.127247 1119007 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:26.127277 1119007 api_server.go:131] duration metric: took 11.792506ms to wait for apiserver health ...
	I0127 03:02:26.127289 1119007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:26.285021 1119007 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:26.285059 1119007 system_pods.go:61] "coredns-668d6bf9bc-86j6q" [9b85ae79-ae19-4cd1-a0da-0343c9e2801c] Running
	I0127 03:02:26.285067 1119007 system_pods.go:61] "coredns-668d6bf9bc-fk8cw" [c7075b92-233d-4a5a-b864-ef349d7125e7] Running
	I0127 03:02:26.285073 1119007 system_pods.go:61] "etcd-no-preload-887091" [45d4a5fc-797f-4d4a-9204-049ebcdc5647] Running
	I0127 03:02:26.285079 1119007 system_pods.go:61] "kube-apiserver-no-preload-887091" [11e7ea14-678a-408f-a722-8fedb984c086] Running
	I0127 03:02:26.285085 1119007 system_pods.go:61] "kube-controller-manager-no-preload-887091" [95d63381-33aa-428b-80b1-6e8ccf96b8a1] Running
	I0127 03:02:26.285089 1119007 system_pods.go:61] "kube-proxy-45pz6" [b3aa986f-d6d8-4050-8760-438aabd39bdc] Running
	I0127 03:02:26.285094 1119007 system_pods.go:61] "kube-scheduler-no-preload-887091" [5065d24f-256d-43ad-bd00-1d5868b7214d] Running
	I0127 03:02:26.285104 1119007 system_pods.go:61] "metrics-server-f79f97bbb-vshg4" [33ae36ed-d8a4-4d60-bcd0-1becf2d490bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:26.285110 1119007 system_pods.go:61] "storage-provisioner" [127a1f13-b70c-4482-bd8b-14a6bf24b663] Running
	I0127 03:02:26.285121 1119007 system_pods.go:74] duration metric: took 157.824017ms to wait for pod list to return data ...
	I0127 03:02:26.285134 1119007 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:02:26.480092 1119007 default_sa.go:45] found service account: "default"
	I0127 03:02:26.480128 1119007 default_sa.go:55] duration metric: took 194.984911ms for default service account to be created ...
	I0127 03:02:26.480141 1119007 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:02:26.688727 1119007 system_pods.go:87] 9 kube-system pods found
	I0127 03:02:25.099839 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:27.100451 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:29.599652 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:26.158504 1121411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.328648156s)
	I0127 03:02:26.158550 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:26.404894 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:26.526530 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
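	The restart path above replays individual kubeadm phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config instead of running a full `kubeadm init`. A hypothetical sketch of driving the same phase sequence from Go with os/exec (the real runs also go through sudo and a pinned PATH, omitted here):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Phase order mirrors the restartPrimaryControlPlane sequence in the log above.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, args := range phases {
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command("kubeadm", args...).CombinedOutput()
            if err != nil {
                panic(fmt.Sprintf("kubeadm %v failed: %v\n%s", args, err, out))
            }
            fmt.Printf("kubeadm %v: ok\n", args)
        }
    }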
	I0127 03:02:26.667432 1121411 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:26.667635 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:27.167965 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:27.667769 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:27.702851 1121411 api_server.go:72] duration metric: took 1.03541528s to wait for apiserver process to appear ...
	I0127 03:02:27.702957 1121411 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:27.702996 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:27.703762 1121411 api_server.go:269] stopped: https://192.168.50.51:8443/healthz: Get "https://192.168.50.51:8443/healthz": dial tcp 192.168.50.51:8443: connect: connection refused
	I0127 03:02:28.203377 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:28.204135 1121411 api_server.go:269] stopped: https://192.168.50.51:8443/healthz: Get "https://192.168.50.51:8443/healthz": dial tcp 192.168.50.51:8443: connect: connection refused
	I0127 03:02:28.703884 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:32.408333 1119263 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:32.408420 1119263 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:32.408564 1119263 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:32.408723 1119263 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:32.408850 1119263 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:32.408936 1119263 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:32.410600 1119263 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:32.410694 1119263 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:32.410784 1119263 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:32.410899 1119263 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:32.410985 1119263 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:32.411061 1119263 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:32.411144 1119263 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:32.411243 1119263 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:32.411349 1119263 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:32.411474 1119263 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:32.411592 1119263 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:32.411654 1119263 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:32.411755 1119263 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:32.411823 1119263 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:32.411900 1119263 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:32.411957 1119263 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:32.412019 1119263 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:32.412077 1119263 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:32.412166 1119263 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:32.412460 1119263 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:32.415088 1119263 out.go:235]   - Booting up control plane ...
	I0127 03:02:32.415215 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:32.415349 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:32.415444 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:32.415597 1119263 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:32.415722 1119263 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:32.415772 1119263 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:32.415934 1119263 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:32.416041 1119263 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:32.416113 1119263 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001709036s
	I0127 03:02:32.416228 1119263 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:32.416326 1119263 kubeadm.go:310] [api-check] The API server is healthy after 6.003070171s
	I0127 03:02:32.416466 1119263 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:32.416619 1119263 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:32.416691 1119263 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:32.416890 1119263 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-264552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:32.416990 1119263 kubeadm.go:310] [bootstrap-token] Using token: glfh41.djplehex31d2nmyn
	I0127 03:02:32.418322 1119263 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:32.418468 1119263 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:32.418553 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:32.418749 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:32.418932 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:32.419089 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:32.419214 1119263 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:32.419378 1119263 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:32.419436 1119263 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:32.419498 1119263 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:32.419505 1119263 kubeadm.go:310] 
	I0127 03:02:32.419581 1119263 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:32.419587 1119263 kubeadm.go:310] 
	I0127 03:02:32.419686 1119263 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:32.419696 1119263 kubeadm.go:310] 
	I0127 03:02:32.419729 1119263 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:32.419809 1119263 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:32.419880 1119263 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:32.419891 1119263 kubeadm.go:310] 
	I0127 03:02:32.419987 1119263 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:32.419998 1119263 kubeadm.go:310] 
	I0127 03:02:32.420067 1119263 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:32.420078 1119263 kubeadm.go:310] 
	I0127 03:02:32.420143 1119263 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:32.420236 1119263 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:32.420319 1119263 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:32.420330 1119263 kubeadm.go:310] 
	I0127 03:02:32.420421 1119263 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:32.420508 1119263 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:32.420519 1119263 kubeadm.go:310] 
	I0127 03:02:32.420616 1119263 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token glfh41.djplehex31d2nmyn \
	I0127 03:02:32.420750 1119263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
	I0127 03:02:32.420781 1119263 kubeadm.go:310] 	--control-plane 
	I0127 03:02:32.420790 1119263 kubeadm.go:310] 
	I0127 03:02:32.420891 1119263 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:32.420902 1119263 kubeadm.go:310] 
	I0127 03:02:32.421036 1119263 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token glfh41.djplehex31d2nmyn \
	I0127 03:02:32.421192 1119263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba 
	I0127 03:02:32.421210 1119263 cni.go:84] Creating CNI manager for ""
	I0127 03:02:32.421220 1119263 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:32.422542 1119263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:30.820769 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 03:02:30.820809 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 03:02:30.820827 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:30.840404 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 03:02:30.840436 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 03:02:31.203948 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:31.209795 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:02:31.209820 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:02:31.703217 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:31.724822 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:02:31.724862 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:02:32.203446 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:32.210068 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:02:32.210100 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:02:32.703717 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:32.709016 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 200:
	ok
	I0127 03:02:32.719003 1121411 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:32.719041 1121411 api_server.go:131] duration metric: took 5.016063652s to wait for apiserver health ...
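	The 403 and 500 responses above are expected while the freshly restarted apiserver is still completing its post-start hooks (the rbac/bootstrap-roles and bootstrap-system-priority-classes checks flip last); the wait simply retries /healthz until it sees a plain 200 "ok". A hypothetical Go polling loop doing the same, with TLS verification skipped only because this sketch wires in no CA bundle:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.50.51:8443/healthz"
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // typically the literal "ok"
                    return
                }
                fmt.Printf("healthz not ready yet: HTTP %d\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for a healthy apiserver")
    }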
	I0127 03:02:32.719055 1121411 cni.go:84] Creating CNI manager for ""
	I0127 03:02:32.719065 1121411 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:32.721101 1121411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:32.722433 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:32.734857 1121411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:32.761120 1121411 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:32.778500 1121411 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:32.778547 1121411 system_pods.go:61] "coredns-668d6bf9bc-dscrm" [2869a26b-4522-43cd-8417-abc17b77dc7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:02:32.778558 1121411 system_pods.go:61] "coredns-668d6bf9bc-rcdv8" [7697dd25-c99a-4413-a242-54cca1d1e5e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:02:32.778571 1121411 system_pods.go:61] "etcd-newest-cni-642127" [816ba553-68cb-4496-8dba-7839e9799916] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 03:02:32.778583 1121411 system_pods.go:61] "kube-apiserver-newest-cni-642127" [69c55a7c-148b-40ff-86ec-739c5f668a11] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 03:02:32.778596 1121411 system_pods.go:61] "kube-controller-manager-newest-cni-642127" [1b6d1085-c4fe-43f2-a9ab-320adeb6cd38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 03:02:32.778608 1121411 system_pods.go:61] "kube-proxy-5q7mp" [1efd4424-3475-45e1-b80b-c941de90e34d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 03:02:32.778620 1121411 system_pods.go:61] "kube-scheduler-newest-cni-642127" [f1c81d74-2818-4093-9e69-19359cc3ff50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 03:02:32.778631 1121411 system_pods.go:61] "metrics-server-f79f97bbb-47hqq" [7f6ccb13-e73f-4514-a639-e1297b545cf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:32.778642 1121411 system_pods.go:61] "storage-provisioner" [ee06d1e8-0ae7-42c7-9c5b-d19fcfb83f40] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 03:02:32.778653 1121411 system_pods.go:74] duration metric: took 17.501517ms to wait for pod list to return data ...
	I0127 03:02:32.778667 1121411 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:02:32.783164 1121411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:02:32.783201 1121411 node_conditions.go:123] node cpu capacity is 2
	I0127 03:02:32.783216 1121411 node_conditions.go:105] duration metric: took 4.539816ms to run NodePressure ...
	I0127 03:02:32.783239 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:02:33.135340 1121411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:33.148690 1121411 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:33.148723 1121411 kubeadm.go:597] duration metric: took 8.771274475s to restartPrimaryControlPlane
	I0127 03:02:33.148739 1121411 kubeadm.go:394] duration metric: took 8.851928105s to StartCluster
	I0127 03:02:33.148766 1121411 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:33.148862 1121411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:33.150733 1121411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:33.150984 1121411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:33.151079 1121411 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:33.151202 1121411 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-642127"
	I0127 03:02:33.151222 1121411 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-642127"
	W0127 03:02:33.151238 1121411 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:33.151257 1121411 addons.go:69] Setting metrics-server=true in profile "newest-cni-642127"
	I0127 03:02:33.151258 1121411 addons.go:69] Setting default-storageclass=true in profile "newest-cni-642127"
	I0127 03:02:33.151284 1121411 addons.go:238] Setting addon metrics-server=true in "newest-cni-642127"
	I0127 03:02:33.151272 1121411 addons.go:69] Setting dashboard=true in profile "newest-cni-642127"
	W0127 03:02:33.151294 1121411 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:33.151294 1121411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-642127"
	I0127 03:02:33.151315 1121411 addons.go:238] Setting addon dashboard=true in "newest-cni-642127"
	I0127 03:02:33.151313 1121411 config.go:182] Loaded profile config "newest-cni-642127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	W0127 03:02:33.151325 1121411 addons.go:247] addon dashboard should already be in state true
	I0127 03:02:33.151330 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
	I0127 03:02:33.151355 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
	I0127 03:02:33.151285 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
	I0127 03:02:33.151717 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.151747 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.151754 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.151760 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.151789 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.151793 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.151825 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.151865 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.152612 1121411 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:33.154050 1121411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:33.169429 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
	I0127 03:02:33.169982 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.170451 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.170472 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.170815 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.171371 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36587
	I0127 03:02:33.171487 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.171528 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.171747 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0127 03:02:33.171942 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.172289 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.172471 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.172498 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.172746 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.172766 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.172908 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.174172 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.174237 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.175157 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0127 03:02:33.175572 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.175616 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.175822 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.177792 1121411 addons.go:238] Setting addon default-storageclass=true in "newest-cni-642127"
	W0127 03:02:33.177817 1121411 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:33.177848 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
	I0127 03:02:33.178206 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.178256 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.178862 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.178892 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.179421 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.192581 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I0127 03:02:33.193097 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.193643 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.193668 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.194026 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.194248 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.197497 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:33.199029 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
	I0127 03:02:33.199688 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.199789 1121411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:33.200189 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.200217 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.200630 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.200826 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.201177 1121411 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:33.201196 1121411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:33.201215 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:33.201773 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.201821 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.203099 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:33.204646 1121411 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:33.205709 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.206717 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:33.206782 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.207074 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:33.207272 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:33.207453 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:33.207613 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:33.208044 1121411 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:33.209101 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:33.209120 1121411 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:33.209140 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:33.212709 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.213133 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:33.213153 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.213451 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:33.213632 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:33.213734 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:33.213819 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:33.219861 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43957
	I0127 03:02:33.220403 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.220991 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.221024 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.221408 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.222196 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:33.222254 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:33.223731 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0127 03:02:33.224051 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.224552 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.224573 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.224816 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.225077 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.227906 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:33.229635 1121411 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:32.423722 1119263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:32.436568 1119263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:32.461950 1119263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:32.462072 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:32.462109 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-264552 minikube.k8s.io/updated_at=2025_01_27T03_02_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=embed-certs-264552 minikube.k8s.io/primary=true
	I0127 03:02:32.477721 1119263 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:32.739220 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:33.239786 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:33.740039 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:34.239291 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:34.740312 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:31.600099 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:33.600177 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:33.231071 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:33.231090 1121411 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:33.231112 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:33.233979 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.234359 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:33.234412 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.234633 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:33.234777 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:33.234927 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:33.235147 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:33.243914 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40181
	I0127 03:02:33.244332 1121411 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:33.244875 1121411 main.go:141] libmachine: Using API Version  1
	I0127 03:02:33.244889 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:33.245272 1121411 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:33.245443 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
	I0127 03:02:33.247204 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
	I0127 03:02:33.247418 1121411 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:33.247429 1121411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:33.247455 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
	I0127 03:02:33.250553 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.251030 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
	I0127 03:02:33.251045 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
	I0127 03:02:33.251208 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
	I0127 03:02:33.251359 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
	I0127 03:02:33.251505 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
	I0127 03:02:33.251611 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
	I0127 03:02:33.375505 1121411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:33.394405 1121411 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:33.394507 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:33.410947 1121411 api_server.go:72] duration metric: took 259.928237ms to wait for apiserver process to appear ...
	I0127 03:02:33.410983 1121411 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:33.411005 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
	I0127 03:02:33.416758 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 200:
	ok
	I0127 03:02:33.418367 1121411 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:33.418395 1121411 api_server.go:131] duration metric: took 7.402525ms to wait for apiserver health ...
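
The api_server.go lines above show the test polling https://192.168.50.51:8443/healthz until the apiserver answers 200 before moving on to the pod checks. A minimal sketch of such a polling loop in Go follows; it is not minikube's actual implementation, and it skips TLS verification purely for brevity, whereas the real client authenticates with the cluster's certificates.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // TLS verification is skipped only to keep the sketch short.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.51:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
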
	I0127 03:02:33.418407 1121411 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:33.424893 1121411 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:33.424921 1121411 system_pods.go:61] "coredns-668d6bf9bc-dscrm" [2869a26b-4522-43cd-8417-abc17b77dc7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:02:33.424928 1121411 system_pods.go:61] "coredns-668d6bf9bc-rcdv8" [7697dd25-c99a-4413-a242-54cca1d1e5e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:02:33.424936 1121411 system_pods.go:61] "etcd-newest-cni-642127" [816ba553-68cb-4496-8dba-7839e9799916] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 03:02:33.424965 1121411 system_pods.go:61] "kube-apiserver-newest-cni-642127" [69c55a7c-148b-40ff-86ec-739c5f668a11] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 03:02:33.424984 1121411 system_pods.go:61] "kube-controller-manager-newest-cni-642127" [1b6d1085-c4fe-43f2-a9ab-320adeb6cd38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 03:02:33.424992 1121411 system_pods.go:61] "kube-proxy-5q7mp" [1efd4424-3475-45e1-b80b-c941de90e34d] Running
	I0127 03:02:33.424997 1121411 system_pods.go:61] "kube-scheduler-newest-cni-642127" [f1c81d74-2818-4093-9e69-19359cc3ff50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 03:02:33.425005 1121411 system_pods.go:61] "metrics-server-f79f97bbb-47hqq" [7f6ccb13-e73f-4514-a639-e1297b545cf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:33.425009 1121411 system_pods.go:61] "storage-provisioner" [ee06d1e8-0ae7-42c7-9c5b-d19fcfb83f40] Running
	I0127 03:02:33.425017 1121411 system_pods.go:74] duration metric: took 6.604015ms to wait for pod list to return data ...
	I0127 03:02:33.425027 1121411 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:02:33.427992 1121411 default_sa.go:45] found service account: "default"
	I0127 03:02:33.428016 1121411 default_sa.go:55] duration metric: took 2.981475ms for default service account to be created ...
	I0127 03:02:33.428030 1121411 kubeadm.go:582] duration metric: took 277.019922ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 03:02:33.428053 1121411 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:02:33.431283 1121411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:02:33.431303 1121411 node_conditions.go:123] node cpu capacity is 2
	I0127 03:02:33.431313 1121411 node_conditions.go:105] duration metric: took 3.254985ms to run NodePressure ...
	I0127 03:02:33.431324 1121411 start.go:241] waiting for startup goroutines ...
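
The node_conditions.go lines just above record the node's ephemeral-storage capacity (17734596Ki) and CPU capacity (2) as part of the NodePressure verification. Those values come from the node object's status; a hedged client-go sketch that reads them is below (the kubeconfig path is an example, not taken from this run).

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Example path; the runs in this report use the KUBECONFIG shown in settings.go.
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity is a map of resource name to quantity on the node status.
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }
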
	I0127 03:02:33.462238 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:33.462261 1121411 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:33.476129 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:33.476162 1121411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:33.488754 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:33.488789 1121411 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:33.497073 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:33.519522 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:33.519557 1121411 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:33.551868 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:33.551905 1121411 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:33.565343 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:33.565374 1121411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:33.600695 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:33.600720 1121411 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:33.602150 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:33.632660 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:33.632694 1121411 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:33.652690 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:33.705754 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:33.705786 1121411 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:33.793208 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:33.793261 1121411 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:33.881849 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:33.881884 1121411 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:33.979510 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:33.979542 1121411 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:34.040605 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:34.040637 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:34.041032 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:34.041080 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:34.041090 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:34.041113 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:34.041137 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:34.041431 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:34.041481 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:34.041493 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:34.058399 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:34.104645 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:34.104666 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:34.104999 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:34.105025 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:34.105046 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:35.194812 1121411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.542086223s)
	I0127 03:02:35.194884 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.194899 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.194665 1121411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.592471736s)
	I0127 03:02:35.194995 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.195010 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.197298 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.197320 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.197331 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.197338 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.197484 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:35.197524 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.197543 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.197551 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.197563 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.197565 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.197575 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.197591 1121411 addons.go:479] Verifying addon metrics-server=true in "newest-cni-642127"
	I0127 03:02:35.197806 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.197821 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.738350 1121411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.679893698s)
	I0127 03:02:35.738414 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.738431 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.738859 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.738880 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.738897 1121411 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:35.738906 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
	I0127 03:02:35.739194 1121411 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:35.739211 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:35.739256 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
	I0127 03:02:35.740543 1121411 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-642127 addons enable metrics-server
	
	I0127 03:02:35.742112 1121411 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0127 03:02:35.743312 1121411 addons.go:514] duration metric: took 2.592255359s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0127 03:02:35.743356 1121411 start.go:246] waiting for cluster config update ...
	I0127 03:02:35.743372 1121411 start.go:255] writing updated cluster config ...
	I0127 03:02:35.743643 1121411 ssh_runner.go:195] Run: rm -f paused
	I0127 03:02:35.802583 1121411 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 03:02:35.804271 1121411 out.go:177] * Done! kubectl is now configured to use "newest-cni-642127" cluster and "default" namespace by default
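
The "Done!" line means minikube has rewritten the kubeconfig referenced earlier by settings.go so that its current-context points at the new profile. A small sketch using client-go's clientcmd loader, which prints the context kubectl would pick up (the default loading rules honor $KUBECONFIG, as in this run):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Default rules read $KUBECONFIG, falling back to ~/.kube/config.
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := rules.Load()
        if err != nil {
            panic(err)
        }
        fmt.Println("current-context:", cfg.CurrentContext)
        for name := range cfg.Contexts {
            fmt.Println("known context:", name)
        }
    }
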
	I0127 03:02:35.240046 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:35.739577 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:36.239666 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:36.396540 1119263 kubeadm.go:1113] duration metric: took 3.934543669s to wait for elevateKubeSystemPrivileges
	I0127 03:02:36.396587 1119263 kubeadm.go:394] duration metric: took 4m36.765414047s to StartCluster
	I0127 03:02:36.396612 1119263 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:36.396700 1119263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:36.399283 1119263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:36.399607 1119263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:36.399896 1119263 config.go:182] Loaded profile config "embed-certs-264552": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:36.399967 1119263 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:36.400065 1119263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-264552"
	I0127 03:02:36.400097 1119263 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-264552"
	W0127 03:02:36.400111 1119263 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:36.400147 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.400364 1119263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-264552"
	I0127 03:02:36.400393 1119263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-264552"
	I0127 03:02:36.400697 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.400746 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.400860 1119263 addons.go:69] Setting dashboard=true in profile "embed-certs-264552"
	I0127 03:02:36.400889 1119263 addons.go:238] Setting addon dashboard=true in "embed-certs-264552"
	I0127 03:02:36.400891 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	W0127 03:02:36.400899 1119263 addons.go:247] addon dashboard should already be in state true
	I0127 03:02:36.400934 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.400962 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.401007 1119263 addons.go:69] Setting metrics-server=true in profile "embed-certs-264552"
	I0127 03:02:36.401034 1119263 addons.go:238] Setting addon metrics-server=true in "embed-certs-264552"
	W0127 03:02:36.401044 1119263 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:36.401078 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.401508 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.401557 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.401777 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.401824 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.401991 1119263 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:36.403910 1119263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:36.422683 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0127 03:02:36.423177 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.423824 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.423851 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.424298 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.424516 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.425635 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0127 03:02:36.425994 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0127 03:02:36.426142 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.426423 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.426703 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.426729 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.427088 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.427111 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.427288 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.427869 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.427910 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.429980 1119263 addons.go:238] Setting addon default-storageclass=true in "embed-certs-264552"
	W0127 03:02:36.429999 1119263 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:36.430029 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
	I0127 03:02:36.430409 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.430443 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.430902 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.431582 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.431620 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.449634 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0127 03:02:36.450301 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.451062 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.451085 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.451525 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.452191 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.452239 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.455086 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40299
	I0127 03:02:36.455301 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41491
	I0127 03:02:36.455535 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.456246 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.456264 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.456672 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.456898 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.458545 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0127 03:02:36.459300 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.459602 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.460164 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.460195 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.461041 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.461379 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.461672 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:36.461676 1119263 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:36.461723 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:36.461915 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.461930 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.462520 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.462923 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.465082 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.465338 1119263 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:36.466448 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:36.466474 1119263 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:36.466495 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.466570 1119263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:36.468155 1119263 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:36.468187 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:36.468209 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.470910 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.471779 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.471818 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.472039 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.472253 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.472399 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.472572 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.475423 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34703
	I0127 03:02:36.476153 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.476804 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.476823 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.477245 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.477505 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.479472 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.481333 1119263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:36.481739 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0127 03:02:36.482275 1119263 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:36.482837 1119263 main.go:141] libmachine: Using API Version  1
	I0127 03:02:36.482854 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:36.482868 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:36.482887 1119263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:36.482910 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.483231 1119263 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:36.483493 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
	I0127 03:02:36.486181 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
	I0127 03:02:36.486454 1119263 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:36.486475 1119263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:36.486493 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
	I0127 03:02:36.488039 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.488500 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.488532 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.488756 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.488966 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.489130 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.489289 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.489612 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.489866 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.489889 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.490026 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.490149 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.490261 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.490344 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.494271 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.494636 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
	I0127 03:02:36.494659 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
	I0127 03:02:36.495050 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
	I0127 03:02:36.495292 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
	I0127 03:02:36.495511 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
	I0127 03:02:36.495682 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
	I0127 03:02:36.737773 1119263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:36.826450 1119263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-264552" to be "Ready" ...
	I0127 03:02:36.857580 1119263 node_ready.go:49] node "embed-certs-264552" has status "Ready":"True"
	I0127 03:02:36.857609 1119263 node_ready.go:38] duration metric: took 31.04815ms for node "embed-certs-264552" to be "Ready" ...
	I0127 03:02:36.857623 1119263 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:36.873458 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:36.877540 1119263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:36.957829 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:36.957866 1119263 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:37.005603 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:37.005635 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:37.006377 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:37.031565 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:37.031587 1119263 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:37.100245 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:37.100282 1119263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:37.175281 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:37.175309 1119263 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:37.221791 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:37.221825 1119263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:37.308268 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:37.423632 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:37.423660 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:37.588554 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:37.588586 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:37.589111 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
	I0127 03:02:37.589130 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:37.589147 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:37.589162 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:37.589176 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:37.589462 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:37.589483 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:37.634711 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:37.634744 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:37.635023 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
	I0127 03:02:37.635065 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:37.635073 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:37.649206 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:37.649231 1119263 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:37.780671 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:37.780709 1119263 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:37.963118 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:37.963151 1119263 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:38.051717 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:38.051755 1119263 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:38.102698 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:38.102726 1119263 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:38.177754 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:38.867496 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.861076308s)
	I0127 03:02:38.867579 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.867594 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.868010 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.868037 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.868054 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.868067 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.868377 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.868397 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.923746 1119263 pod_ready.go:103] pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:38.991645 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.683326945s)
	I0127 03:02:38.991708 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.991728 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.992116 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.992137 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.992146 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:38.992153 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:38.992566 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
	I0127 03:02:38.992598 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:38.992624 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:38.992643 1119263 addons.go:479] Verifying addon metrics-server=true in "embed-certs-264552"
	I0127 03:02:39.990731 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.812917797s)
	I0127 03:02:39.990802 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:39.990818 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:39.991192 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:39.991223 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:39.991235 1119263 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:39.991246 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
	I0127 03:02:39.991554 1119263 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:39.991575 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:39.993095 1119263 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-264552 addons enable metrics-server
	
	I0127 03:02:39.994564 1119263 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 03:02:35.602346 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:38.100810 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:39.995898 1119263 addons.go:514] duration metric: took 3.595931069s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 03:02:40.888544 1119263 pod_ready.go:93] pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.888568 1119263 pod_ready.go:82] duration metric: took 4.01099998s for pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.888579 1119263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.895910 1119263 pod_ready.go:93] pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.895941 1119263 pod_ready.go:82] duration metric: took 7.354168ms for pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.895955 1119263 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.900393 1119263 pod_ready.go:93] pod "etcd-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.900415 1119263 pod_ready.go:82] duration metric: took 4.45357ms for pod "etcd-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.900426 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.908664 1119263 pod_ready.go:93] pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:40.908686 1119263 pod_ready.go:82] duration metric: took 8.251039ms for pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:40.908697 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:42.917072 1119263 pod_ready.go:103] pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:44.927051 1119263 pod_ready.go:93] pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:44.927083 1119263 pod_ready.go:82] duration metric: took 4.01837775s for pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.927096 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kwqqr" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.939727 1119263 pod_ready.go:93] pod "kube-proxy-kwqqr" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:44.939759 1119263 pod_ready.go:82] duration metric: took 12.654042ms for pod "kube-proxy-kwqqr" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.939772 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.966136 1119263 pod_ready.go:93] pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:44.966165 1119263 pod_ready.go:82] duration metric: took 26.38251ms for pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:44.966178 1119263 pod_ready.go:39] duration metric: took 8.108541494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:44.966199 1119263 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:44.966260 1119263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:40.598596 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:41.593185 1119269 pod_ready.go:82] duration metric: took 4m0.0010842s for pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:41.593221 1119269 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 03:02:41.593251 1119269 pod_ready.go:39] duration metric: took 4m13.044846596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:41.593292 1119269 kubeadm.go:597] duration metric: took 4m21.461431723s to restartPrimaryControlPlane
	W0127 03:02:41.593372 1119269 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:41.593408 1119269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:02:43.620030 1119269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.026590178s)
	I0127 03:02:43.620115 1119269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:43.639142 1119269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:43.651292 1119269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:43.661667 1119269 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:43.661687 1119269 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:43.661733 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 03:02:43.672110 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:43.672165 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:43.683718 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 03:02:43.693914 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:43.693983 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:43.704250 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 03:02:43.714202 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:43.714283 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:43.724775 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 03:02:43.734789 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:43.734857 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:43.746079 1119269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:43.925921 1119269 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:44.991380 1119263 api_server.go:72] duration metric: took 8.59171979s to wait for apiserver process to appear ...
	I0127 03:02:44.991410 1119263 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:44.991439 1119263 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0127 03:02:44.997033 1119263 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I0127 03:02:44.998283 1119263 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:44.998310 1119263 api_server.go:131] duration metric: took 6.891198ms to wait for apiserver health ...
	I0127 03:02:44.998321 1119263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:45.087014 1119263 system_pods.go:59] 9 kube-system pods found
	I0127 03:02:45.087059 1119263 system_pods.go:61] "coredns-668d6bf9bc-mbkl2" [29059a1e-4228-4fbc-bf18-0de800cbb47a] Running
	I0127 03:02:45.087067 1119263 system_pods.go:61] "coredns-668d6bf9bc-n5wn4" [416b5ae4-f786-4b1e-a699-d688b967a6f4] Running
	I0127 03:02:45.087073 1119263 system_pods.go:61] "etcd-embed-certs-264552" [b2389caf-28fb-42d8-9912-8c3829f8bfd6] Running
	I0127 03:02:45.087079 1119263 system_pods.go:61] "kube-apiserver-embed-certs-264552" [0150043f-38b8-4946-84f1-0c9c7aaf7328] Running
	I0127 03:02:45.087084 1119263 system_pods.go:61] "kube-controller-manager-embed-certs-264552" [940554f4-564d-4939-a09a-0ea61e36ff6c] Running
	I0127 03:02:45.087090 1119263 system_pods.go:61] "kube-proxy-kwqqr" [85b35a19-646d-43a8-b90f-c5a5b4a93393] Running
	I0127 03:02:45.087096 1119263 system_pods.go:61] "kube-scheduler-embed-certs-264552" [4a578d9d-f487-4839-a23d-1ec267612f0d] Running
	I0127 03:02:45.087106 1119263 system_pods.go:61] "metrics-server-f79f97bbb-6dg5x" [4b9cd5d7-1055-45ea-8ac9-1a91b9246c0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:02:45.087114 1119263 system_pods.go:61] "storage-provisioner" [4e4e1f9a-505b-4ed2-ad81-5543176f645a] Running
	I0127 03:02:45.087123 1119263 system_pods.go:74] duration metric: took 88.795129ms to wait for pod list to return data ...
	I0127 03:02:45.087134 1119263 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:02:45.282547 1119263 default_sa.go:45] found service account: "default"
	I0127 03:02:45.282578 1119263 default_sa.go:55] duration metric: took 195.436382ms for default service account to be created ...
	I0127 03:02:45.282589 1119263 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:02:45.486513 1119263 system_pods.go:87] 9 kube-system pods found
	I0127 03:02:52.671028 1119269 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:52.671099 1119269 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:52.671206 1119269 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:52.671380 1119269 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:52.671539 1119269 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:52.671639 1119269 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:52.673297 1119269 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:52.673383 1119269 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:52.673474 1119269 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:52.673554 1119269 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:52.673609 1119269 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:52.673670 1119269 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:52.673716 1119269 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:52.673767 1119269 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:52.673816 1119269 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:52.673876 1119269 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:52.673954 1119269 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:52.673999 1119269 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:52.674047 1119269 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:52.674108 1119269 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:52.674187 1119269 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:52.674263 1119269 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:52.674321 1119269 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:52.674367 1119269 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:52.674447 1119269 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:52.674507 1119269 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:52.675997 1119269 out.go:235]   - Booting up control plane ...
	I0127 03:02:52.676130 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:52.676280 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:52.676377 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:52.676517 1119269 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:52.676652 1119269 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:52.676719 1119269 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:52.676922 1119269 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:52.677082 1119269 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:52.677173 1119269 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001864315s
	I0127 03:02:52.677287 1119269 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:52.677368 1119269 kubeadm.go:310] [api-check] The API server is healthy after 5.001344194s
	I0127 03:02:52.677511 1119269 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:52.677653 1119269 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:52.677715 1119269 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:52.677867 1119269 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-717075 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:52.677952 1119269 kubeadm.go:310] [bootstrap-token] Using token: dptef9.zgjhm0hnxmak7ndp
	I0127 03:02:52.679531 1119269 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:52.679681 1119269 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:52.679793 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:52.680000 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:52.680151 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:52.680307 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:52.680415 1119269 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:52.680548 1119269 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:52.680611 1119269 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:52.680680 1119269 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:52.680690 1119269 kubeadm.go:310] 
	I0127 03:02:52.680769 1119269 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:52.680779 1119269 kubeadm.go:310] 
	I0127 03:02:52.680875 1119269 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:52.680886 1119269 kubeadm.go:310] 
	I0127 03:02:52.680922 1119269 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:52.681024 1119269 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:52.681096 1119269 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:52.681106 1119269 kubeadm.go:310] 
	I0127 03:02:52.681192 1119269 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:52.681208 1119269 kubeadm.go:310] 
	I0127 03:02:52.681275 1119269 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:52.681289 1119269 kubeadm.go:310] 
	I0127 03:02:52.681363 1119269 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:52.681491 1119269 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:52.681562 1119269 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:52.681568 1119269 kubeadm.go:310] 
	I0127 03:02:52.681636 1119269 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:52.681749 1119269 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:52.681759 1119269 kubeadm.go:310] 
	I0127 03:02:52.681896 1119269 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token dptef9.zgjhm0hnxmak7ndp \
	I0127 03:02:52.682053 1119269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
	I0127 03:02:52.682085 1119269 kubeadm.go:310] 	--control-plane 
	I0127 03:02:52.682091 1119269 kubeadm.go:310] 
	I0127 03:02:52.682242 1119269 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:52.682259 1119269 kubeadm.go:310] 
	I0127 03:02:52.682381 1119269 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token dptef9.zgjhm0hnxmak7ndp \
	I0127 03:02:52.682532 1119269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba 
	I0127 03:02:52.682561 1119269 cni.go:84] Creating CNI manager for ""
	I0127 03:02:52.682574 1119269 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 03:02:52.684226 1119269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:52.685352 1119269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:52.697398 1119269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:52.719046 1119269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:52.719104 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:52.719145 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-717075 minikube.k8s.io/updated_at=2025_01_27T03_02_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=default-k8s-diff-port-717075 minikube.k8s.io/primary=true
	I0127 03:02:52.761799 1119269 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:52.952929 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:53.453841 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:53.953656 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:54.453137 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:54.953750 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:55.453823 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:55.953104 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:56.453840 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:56.953721 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:57.072043 1119269 kubeadm.go:1113] duration metric: took 4.352992678s to wait for elevateKubeSystemPrivileges
	I0127 03:02:57.072116 1119269 kubeadm.go:394] duration metric: took 4m37.021077009s to StartCluster
	I0127 03:02:57.072145 1119269 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:57.072271 1119269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 03:02:57.073904 1119269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:57.074254 1119269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.17 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:02:57.074373 1119269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:57.074508 1119269 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074520 1119269 config.go:182] Loaded profile config "default-k8s-diff-port-717075": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 03:02:57.074535 1119269 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-717075"
	W0127 03:02:57.074544 1119269 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:57.074540 1119269 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074579 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.074576 1119269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717075"
	I0127 03:02:57.074572 1119269 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074588 1119269 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-717075"
	I0127 03:02:57.074605 1119269 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-717075"
	I0127 03:02:57.074614 1119269 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-717075"
	W0127 03:02:57.074616 1119269 addons.go:247] addon dashboard should already be in state true
	W0127 03:02:57.074623 1119269 addons.go:247] addon metrics-server should already be in state true
	I0127 03:02:57.074653 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.074659 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.075056 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075068 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075068 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075121 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.075123 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.075163 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.075267 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.075353 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.081008 1119269 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:57.082885 1119269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:57.094206 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I0127 03:02:57.094931 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.095746 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.095766 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.095843 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0127 03:02:57.095963 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
	I0127 03:02:57.096377 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.096485 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.096649 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.097010 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.097039 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.097172 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.097228 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.097627 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.097906 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.097919 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.098237 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.098286 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.098455 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I0127 03:02:57.098935 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.099556 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.099578 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.099797 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.100439 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.100480 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.100698 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.100896 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.105155 1119269 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-717075"
	W0127 03:02:57.105188 1119269 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:57.105221 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
	I0127 03:02:57.105609 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.105668 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.121375 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0127 03:02:57.121658 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0127 03:02:57.121901 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.122123 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.122486 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0127 03:02:57.122504 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.122523 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.122758 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.122778 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.122813 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0127 03:02:57.122851 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.122923 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.123171 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.123241 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.123868 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.123978 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.123990 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.124007 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.124368 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.124387 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.124452 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.124681 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.124733 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.125300 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 03:02:57.125347 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:57.126534 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.127123 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.127415 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.128921 1119269 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:57.128930 1119269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:57.128931 1119269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:57.130374 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:57.130393 1119269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:57.130411 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:57.130431 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.130395 1119269 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:57.130396 1119269 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:57.130621 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.132516 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:57.132532 1119269 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:57.132547 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.135860 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.135912 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136120 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136644 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.136669 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136702 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.136736 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136747 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.136809 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.137008 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.136938 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.137108 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.137179 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.137309 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.137376 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.137403 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.137589 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.137621 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.137794 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.138008 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.138010 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.152787 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0127 03:02:57.153399 1119269 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:57.153967 1119269 main.go:141] libmachine: Using API Version  1
	I0127 03:02:57.154002 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:57.154377 1119269 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:57.154584 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
	I0127 03:02:57.156381 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
	I0127 03:02:57.156603 1119269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:57.156624 1119269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:57.156649 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
	I0127 03:02:57.159499 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.159944 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
	I0127 03:02:57.160261 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
	I0127 03:02:57.160520 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
	I0127 03:02:57.160684 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
	I0127 03:02:57.163248 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
	I0127 03:02:57.164348 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
	I0127 03:02:57.378051 1119269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:57.433542 1119269 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717075" to be "Ready" ...
	I0127 03:02:57.474874 1119269 node_ready.go:49] node "default-k8s-diff-port-717075" has status "Ready":"True"
	I0127 03:02:57.474911 1119269 node_ready.go:38] duration metric: took 41.327465ms for node "default-k8s-diff-port-717075" to be "Ready" ...
	I0127 03:02:57.474926 1119269 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:57.483255 1119269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:57.519688 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:57.542506 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:57.549073 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:57.549102 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:57.584535 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:57.584568 1119269 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:57.655673 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:57.655711 1119269 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:57.690996 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:57.691028 1119269 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:57.822313 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:57.822349 1119269 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:57.834363 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:57.834392 1119269 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:57.911077 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:58.019919 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:58.019953 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:58.212111 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:58.212145 1119269 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:58.309353 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:58.309381 1119269 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:58.378582 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:58.378611 1119269 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:58.444731 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:58.444762 1119269 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:58.506703 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:58.506745 1119269 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:58.584131 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:58.850852 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.331110115s)
	I0127 03:02:58.850948 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.850973 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.850970 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.308397522s)
	I0127 03:02:58.851017 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.851054 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.851306 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851328 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.851341 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.851348 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.851426 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851444 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.851465 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.851476 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.851634 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851650 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.851693 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:58.851740 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.851762 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:58.851765 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:58.886972 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:58.887006 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:58.887346 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:58.887369 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.219464 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.308329693s)
	I0127 03:02:59.219531 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.219552 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:59.219946 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:59.220003 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.220024 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.220045 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.220059 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:02:59.220303 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:02:59.220340 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.220349 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.220364 1119269 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-717075"
	I0127 03:02:59.493877 1119269 pod_ready.go:93] pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace has status "Ready":"True"
	I0127 03:02:59.493919 1119269 pod_ready.go:82] duration metric: took 2.010631788s for pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:59.493932 1119269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:00.135755 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.551568283s)
	I0127 03:03:00.135819 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:03:00.135831 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:03:00.136153 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:03:00.136171 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:00.136179 1119269 main.go:141] libmachine: Making call to close driver server
	I0127 03:03:00.136187 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
	I0127 03:03:00.136181 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
	I0127 03:03:00.136446 1119269 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:03:00.136459 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:00.137984 1119269 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717075 addons enable metrics-server
	
	I0127 03:03:00.139476 1119269 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 03:03:00.140933 1119269 addons.go:514] duration metric: took 3.06657827s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 03:03:01.501546 1119269 pod_ready.go:103] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"False"
	I0127 03:03:04.000116 1119269 pod_ready.go:103] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"False"
	I0127 03:03:05.002068 1119269 pod_ready.go:93] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:05.002134 1119269 pod_ready.go:82] duration metric: took 5.508188953s for pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:05.002149 1119269 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:05.007136 1119269 pod_ready.go:93] pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:05.007163 1119269 pod_ready.go:82] duration metric: took 5.003743ms for pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:05.007173 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.013821 1119269 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.013847 1119269 pod_ready.go:82] duration metric: took 1.006667196s for pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.013860 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.018661 1119269 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.018683 1119269 pod_ready.go:82] duration metric: took 4.814763ms for pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.018694 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlkhv" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.022482 1119269 pod_ready.go:93] pod "kube-proxy-nlkhv" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.022500 1119269 pod_ready.go:82] duration metric: took 3.79842ms for pod "kube-proxy-nlkhv" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.022512 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.197960 1119269 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.197986 1119269 pod_ready.go:82] duration metric: took 175.467759ms for pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.197995 1119269 pod_ready.go:39] duration metric: took 8.723057571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:03:06.198012 1119269 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:03:06.198073 1119269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:06.215210 1119269 api_server.go:72] duration metric: took 9.140900628s to wait for apiserver process to appear ...
	I0127 03:03:06.215249 1119269 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:03:06.215273 1119269 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8444/healthz ...
	I0127 03:03:06.219951 1119269 api_server.go:279] https://192.168.72.17:8444/healthz returned 200:
	ok
	I0127 03:03:06.220901 1119269 api_server.go:141] control plane version: v1.32.1
	I0127 03:03:06.220922 1119269 api_server.go:131] duration metric: took 5.666132ms to wait for apiserver health ...
	I0127 03:03:06.220929 1119269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:03:06.402128 1119269 system_pods.go:59] 9 kube-system pods found
	I0127 03:03:06.402165 1119269 system_pods.go:61] "coredns-668d6bf9bc-htglq" [2d4500a2-7bc9-4c25-af55-3c20257065c2] Running
	I0127 03:03:06.402172 1119269 system_pods.go:61] "coredns-668d6bf9bc-pwz9n" [cf6b7c7c-59eb-4901-88ba-a6e4556ddd4c] Running
	I0127 03:03:06.402177 1119269 system_pods.go:61] "etcd-default-k8s-diff-port-717075" [50fac615-6926-4023-8467-fa0c3fec39b2] Running
	I0127 03:03:06.402181 1119269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717075" [f86307a0-5994-4178-af8a-43613ed2bd63] Running
	I0127 03:03:06.402186 1119269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717075" [543f1b9a-da5a-4963-adc0-3bb2c88f2de0] Running
	I0127 03:03:06.402191 1119269 system_pods.go:61] "kube-proxy-nlkhv" [57c52d4f-937f-4fc8-98dd-9aa0531f8d17] Running
	I0127 03:03:06.402197 1119269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717075" [bb54f953-7c1f-4ce8-a590-7d029dcfea24] Running
	I0127 03:03:06.402205 1119269 system_pods.go:61] "metrics-server-f79f97bbb-fthnn" [fb8e4d08-fb1f-49a5-8984-44e975174502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:03:06.402211 1119269 system_pods.go:61] "storage-provisioner" [0a7c6b15-4ec5-46cf-8f6e-d98c292af196] Running
	I0127 03:03:06.402225 1119269 system_pods.go:74] duration metric: took 181.288367ms to wait for pod list to return data ...
	I0127 03:03:06.402236 1119269 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:03:06.598976 1119269 default_sa.go:45] found service account: "default"
	I0127 03:03:06.599007 1119269 default_sa.go:55] duration metric: took 196.76041ms for default service account to be created ...
	I0127 03:03:06.599017 1119269 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:03:06.802139 1119269 system_pods.go:87] 9 kube-system pods found
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	3d3abe6a81fe9       523cad1a4df73       4 minutes ago       Exited              dashboard-metrics-scraper   8                   8496d84cfcf21       dashboard-metrics-scraper-86c6bf9756-sldrm
	0bfe4ee9e99ee       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   ceb0e0c4e1ba4       kubernetes-dashboard-7779f9b69b-wqrbr
	53bfeb0c76195       6e38f40d628db       21 minutes ago      Running             storage-provisioner         0                   af488621041e5       storage-provisioner
	facd69943f2dc       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   b7daca2d1c45e       coredns-668d6bf9bc-pwz9n
	6aac73b1e5292       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   adeaed4b8f921       coredns-668d6bf9bc-htglq
	604e3f7fc1034       e29f9c7391fd9       21 minutes ago      Running             kube-proxy                  0                   0ec3a27dc67d9       kube-proxy-nlkhv
	4525f21c6319f       2b0d6572d062c       21 minutes ago      Running             kube-scheduler              2                   33d0dd7956ad2       kube-scheduler-default-k8s-diff-port-717075
	641fc16ce35ff       a9e7e6b294baf       21 minutes ago      Running             etcd                        2                   353caffbfd044       etcd-default-k8s-diff-port-717075
	020146e7b79a0       95c0bda56fc4d       21 minutes ago      Running             kube-apiserver              2                   6aab35c048ac0       kube-apiserver-default-k8s-diff-port-717075
	3aeecd0c0d0fb       019ee182b58e2       21 minutes ago      Running             kube-controller-manager     2                   8ad005f9817fb       kube-controller-manager-default-k8s-diff-port-717075
	
	
	==> containerd <==
	Jan 27 03:14:22 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:14:22.073204950Z" level=info msg="CreateContainer within sandbox \"8496d84cfcf216155865468a73d0454291ae2c858d5663e6a5101aed0d2112ae\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:7,} returns container id \"9d57347dd8d3ca7d4f2387128caad6f30726cced2c9130a95f50bd8e29f24e64\""
	Jan 27 03:14:22 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:14:22.074240308Z" level=info msg="StartContainer for \"9d57347dd8d3ca7d4f2387128caad6f30726cced2c9130a95f50bd8e29f24e64\""
	Jan 27 03:14:22 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:14:22.159765303Z" level=info msg="StartContainer for \"9d57347dd8d3ca7d4f2387128caad6f30726cced2c9130a95f50bd8e29f24e64\" returns successfully"
	Jan 27 03:14:22 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:14:22.208526656Z" level=info msg="shim disconnected" id=9d57347dd8d3ca7d4f2387128caad6f30726cced2c9130a95f50bd8e29f24e64 namespace=k8s.io
	Jan 27 03:14:22 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:14:22.208675014Z" level=warning msg="cleaning up after shim disconnected" id=9d57347dd8d3ca7d4f2387128caad6f30726cced2c9130a95f50bd8e29f24e64 namespace=k8s.io
	Jan 27 03:14:22 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:14:22.208819004Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 03:14:23 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:14:23.151728504Z" level=info msg="RemoveContainer for \"fb3053171390403a791a8530e37846033807bcbb0bf4b9d74515197e7659ceee\""
	Jan 27 03:14:23 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:14:23.158153362Z" level=info msg="RemoveContainer for \"fb3053171390403a791a8530e37846033807bcbb0bf4b9d74515197e7659ceee\" returns successfully"
	Jan 27 03:18:55 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:18:55.042822400Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 03:18:55 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:18:55.050847472Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 03:18:55 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:18:55.052740240Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 03:18:55 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:18:55.052826880Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 03:19:33 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:19:33.045424901Z" level=info msg="CreateContainer within sandbox \"8496d84cfcf216155865468a73d0454291ae2c858d5663e6a5101aed0d2112ae\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 27 03:19:33 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:19:33.071549408Z" level=info msg="CreateContainer within sandbox \"8496d84cfcf216155865468a73d0454291ae2c858d5663e6a5101aed0d2112ae\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"3d3abe6a81fe92bfde37078055c3fcfc51e093347ef672cc48a438bc00b6e545\""
	Jan 27 03:19:33 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:19:33.072218384Z" level=info msg="StartContainer for \"3d3abe6a81fe92bfde37078055c3fcfc51e093347ef672cc48a438bc00b6e545\""
	Jan 27 03:19:33 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:19:33.147559527Z" level=info msg="StartContainer for \"3d3abe6a81fe92bfde37078055c3fcfc51e093347ef672cc48a438bc00b6e545\" returns successfully"
	Jan 27 03:19:33 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:19:33.205922642Z" level=info msg="shim disconnected" id=3d3abe6a81fe92bfde37078055c3fcfc51e093347ef672cc48a438bc00b6e545 namespace=k8s.io
	Jan 27 03:19:33 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:19:33.206116754Z" level=warning msg="cleaning up after shim disconnected" id=3d3abe6a81fe92bfde37078055c3fcfc51e093347ef672cc48a438bc00b6e545 namespace=k8s.io
	Jan 27 03:19:33 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:19:33.206188285Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 03:19:33 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:19:33.918183345Z" level=info msg="RemoveContainer for \"9d57347dd8d3ca7d4f2387128caad6f30726cced2c9130a95f50bd8e29f24e64\""
	Jan 27 03:19:33 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:19:33.932508479Z" level=info msg="RemoveContainer for \"9d57347dd8d3ca7d4f2387128caad6f30726cced2c9130a95f50bd8e29f24e64\" returns successfully"
	Jan 27 03:23:58 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:23:58.050487692Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 03:23:58 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:23:58.059360049Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 03:23:58 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:23:58.061441688Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 03:23:58 default-k8s-diff-port-717075 containerd[552]: time="2025-01-27T03:23:58.061580817Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	
	
	==> coredns [6aac73b1e529294fbd2a7c0fd956a9a842f39a129301e7994dc171eef6de3742] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [facd69943f2dce87aa80a42b4bfe761855ff7efa3945d1c138a1bff10d488fe9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-717075
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-717075
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
	                    minikube.k8s.io/name=default-k8s-diff-port-717075
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T03_02_52_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 03:02:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-717075
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 03:24:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 03:19:23 +0000   Mon, 27 Jan 2025 03:02:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 03:19:23 +0000   Mon, 27 Jan 2025 03:02:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 03:19:23 +0000   Mon, 27 Jan 2025 03:02:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 03:19:23 +0000   Mon, 27 Jan 2025 03:02:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.17
	  Hostname:    default-k8s-diff-port-717075
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba20774be55b4c98b1cbe77a5488cdec
	  System UUID:                ba20774b-e55b-4c98-b1cb-e77a5488cdec
	  Boot ID:                    1408cd7d-df40-4e85-add0-01e1343d627b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-htglq                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-pwz9n                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-717075                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-717075             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-717075    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-nlkhv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-717075             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-fthnn                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-sldrm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-wqrbr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node default-k8s-diff-port-717075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node default-k8s-diff-port-717075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node default-k8s-diff-port-717075 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node default-k8s-diff-port-717075 event: Registered Node default-k8s-diff-port-717075 in Controller
	
	
	==> dmesg <==
	[  +0.053638] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044741] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.162854] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.989760] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.655394] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.828703] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +0.061858] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064958] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +0.198536] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +0.126162] systemd-fstab-generator[514]: Ignoring "noauto" option for root device
	[  +0.330506] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +1.306935] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +2.370691] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +0.885905] kauditd_printk_skb: 225 callbacks suppressed
	[  +5.043103] kauditd_printk_skb: 40 callbacks suppressed
	[ +12.342477] kauditd_printk_skb: 80 callbacks suppressed
	[Jan27 03:02] systemd-fstab-generator[3077]: Ignoring "noauto" option for root device
	[  +6.907440] systemd-fstab-generator[3448]: Ignoring "noauto" option for root device
	[  +0.081983] kauditd_printk_skb: 87 callbacks suppressed
	[  +5.299890] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.062829] systemd-fstab-generator[3591]: Ignoring "noauto" option for root device
	[Jan27 03:03] kauditd_printk_skb: 112 callbacks suppressed
	[ +16.720374] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [641fc16ce35ff412d6e2f723c33a7a936f3646fe9d83b5d9352f6e50e57470be] <==
	{"level":"info","ts":"2025-01-27T03:02:47.584447Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T03:02:47.584991Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T03:02:47.579478Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T03:02:47.589395Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T03:02:47.591413Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T03:02:47.591935Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b3fc7be5b2b4dfe6","local-member-id":"360d66ce47bc1c11","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T03:02:47.593494Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T03:02:47.593756Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T03:02:47.599822Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T03:02:47.602846Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.17:2379"}
	{"level":"info","ts":"2025-01-27T03:03:16.305386Z","caller":"traceutil/trace.go:171","msg":"trace[670913887] linearizableReadLoop","detail":"{readStateIndex:514; appliedIndex:513; }","duration":"346.241235ms","start":"2025-01-27T03:03:15.957973Z","end":"2025-01-27T03:03:16.304215Z","steps":["trace[670913887] 'read index received'  (duration: 342.816865ms)","trace[670913887] 'applied index is now lower than readState.Index'  (duration: 3.423073ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T03:03:16.306086Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"347.813082ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T03:03:16.306157Z","caller":"traceutil/trace.go:171","msg":"trace[2063245424] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:503; }","duration":"348.213993ms","start":"2025-01-27T03:03:15.957929Z","end":"2025-01-27T03:03:16.306143Z","steps":["trace[2063245424] 'agreement among raft nodes before linearized reading'  (duration: 347.807779ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T03:03:16.306198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T03:03:15.957912Z","time spent":"348.265013ms","remote":"127.0.0.1:54790","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-01-27T03:03:16.304569Z","caller":"traceutil/trace.go:171","msg":"trace[814703626] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"435.012311ms","start":"2025-01-27T03:03:15.869261Z","end":"2025-01-27T03:03:16.304273Z","steps":["trace[814703626] 'process raft request'  (duration: 431.926356ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T03:03:16.313386Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T03:03:15.869242Z","time spent":"438.237813ms","remote":"127.0.0.1:54764","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:500 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-01-27T03:12:47.653492Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":836}
	{"level":"info","ts":"2025-01-27T03:12:47.693264Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":836,"took":"38.686296ms","hash":3163270779,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2961408,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-01-27T03:12:47.693526Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3163270779,"revision":836,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T03:17:47.662927Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1088}
	{"level":"info","ts":"2025-01-27T03:17:47.667601Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1088,"took":"4.025884ms","hash":1883203239,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1765376,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T03:17:47.667737Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1883203239,"revision":1088,"compact-revision":836}
	{"level":"info","ts":"2025-01-27T03:22:47.670184Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1340}
	{"level":"info","ts":"2025-01-27T03:22:47.675032Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1340,"took":"3.800384ms","hash":2690785891,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1744896,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T03:22:47.675220Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2690785891,"revision":1340,"compact-revision":1088}
	
	
	==> kernel <==
	 03:24:23 up 26 min,  0 users,  load average: 0.01, 0.15, 0.16
	Linux default-k8s-diff-port-717075 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [020146e7b79a07f89b5adbb22cea5e63f4047b9b81b35282704443e3ac44ecf8] <==
	I0127 03:20:50.219867       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:20:50.221168       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 03:22:49.219611       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:22:49.220068       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 03:22:50.223256       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 03:22:50.223592       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:22:50.223736       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0127 03:22:50.223881       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0127 03:22:50.224951       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:22:50.224986       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 03:23:50.225538       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:23:50.225758       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 03:23:50.225936       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:23:50.226000       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0127 03:23:50.226913       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:23:50.227172       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3aeecd0c0d0fb5ee50b8c2fceb7080b5fb33a10f9d25de17f34a73fdf076647f] <==
	I0127 03:19:17.057659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="92.197µs"
	I0127 03:19:23.444242       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-717075"
	E0127 03:19:25.921713       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:19:26.039457       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 03:19:33.936010       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="88.847µs"
	I0127 03:19:34.980238       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="1.978639ms"
	E0127 03:19:55.928371       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:19:56.050662       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:20:25.934408       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:20:26.058750       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:20:55.941997       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:20:56.070105       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:21:25.950420       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:21:26.078824       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:21:55.958688       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:21:56.088630       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:22:25.965218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:22:26.097248       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:22:55.972240       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:22:56.109598       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:23:25.979618       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:23:26.118278       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:23:55.987797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:23:56.126810       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 03:24:13.060178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="336.832µs"
	
	
	==> kube-proxy [604e3f7fc1034c962aed5d66d4bbc64ad88a4a1524d2b1e4e806638c030193cf] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 03:02:57.627120       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 03:02:57.663145       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.17"]
	E0127 03:02:57.663242       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 03:02:57.754572       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 03:02:57.754600       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 03:02:57.754621       1 server_linux.go:170] "Using iptables Proxier"
	I0127 03:02:57.789615       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 03:02:57.789916       1 server.go:497] "Version info" version="v1.32.1"
	I0127 03:02:57.789934       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 03:02:57.800819       1 config.go:199] "Starting service config controller"
	I0127 03:02:57.800862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 03:02:57.800885       1 config.go:105] "Starting endpoint slice config controller"
	I0127 03:02:57.800889       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 03:02:57.806415       1 config.go:329] "Starting node config controller"
	I0127 03:02:57.806429       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 03:02:57.900975       1 shared_informer.go:320] Caches are synced for service config
	I0127 03:02:57.901037       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 03:02:57.906488       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4525f21c6319f2f0719b72f8e7dcb752b18fe2be1dc19f49840fbfe524d00ce2] <==
	W0127 03:02:49.302190       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:49.302890       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:50.174282       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:50.174579       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:50.180340       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 03:02:50.180877       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:50.231114       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 03:02:50.231462       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:50.236609       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 03:02:50.236715       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:50.303977       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 03:02:50.304389       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:50.331136       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 03:02:50.331501       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:50.358454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:50.358770       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:50.400238       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 03:02:50.400619       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:50.424347       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:50.424686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:50.468516       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 03:02:50.469634       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:50.748384       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 03:02:50.749045       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 03:02:52.682169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 03:23:19 default-k8s-diff-port-717075 kubelet[3455]: I0127 03:23:19.042036    3455 scope.go:117] "RemoveContainer" containerID="3d3abe6a81fe92bfde37078055c3fcfc51e093347ef672cc48a438bc00b6e545"
	Jan 27 03:23:19 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:23:19.042260    3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sldrm_kubernetes-dashboard(55a77dc9-5452-49c6-a419-5484c7563685)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sldrm" podUID="55a77dc9-5452-49c6-a419-5484c7563685"
	Jan 27 03:23:20 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:23:20.043507    3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-fthnn" podUID="fb8e4d08-fb1f-49a5-8984-44e975174502"
	Jan 27 03:23:30 default-k8s-diff-port-717075 kubelet[3455]: I0127 03:23:30.041813    3455 scope.go:117] "RemoveContainer" containerID="3d3abe6a81fe92bfde37078055c3fcfc51e093347ef672cc48a438bc00b6e545"
	Jan 27 03:23:30 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:23:30.042030    3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sldrm_kubernetes-dashboard(55a77dc9-5452-49c6-a419-5484c7563685)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sldrm" podUID="55a77dc9-5452-49c6-a419-5484c7563685"
	Jan 27 03:23:34 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:23:34.043375    3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-fthnn" podUID="fb8e4d08-fb1f-49a5-8984-44e975174502"
	Jan 27 03:23:41 default-k8s-diff-port-717075 kubelet[3455]: I0127 03:23:41.041261    3455 scope.go:117] "RemoveContainer" containerID="3d3abe6a81fe92bfde37078055c3fcfc51e093347ef672cc48a438bc00b6e545"
	Jan 27 03:23:41 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:23:41.041553    3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sldrm_kubernetes-dashboard(55a77dc9-5452-49c6-a419-5484c7563685)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sldrm" podUID="55a77dc9-5452-49c6-a419-5484c7563685"
	Jan 27 03:23:47 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:23:47.043221    3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-fthnn" podUID="fb8e4d08-fb1f-49a5-8984-44e975174502"
	Jan 27 03:23:52 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:23:52.092622    3455 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 03:23:52 default-k8s-diff-port-717075 kubelet[3455]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 03:23:52 default-k8s-diff-port-717075 kubelet[3455]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 03:23:52 default-k8s-diff-port-717075 kubelet[3455]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 03:23:52 default-k8s-diff-port-717075 kubelet[3455]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 03:23:55 default-k8s-diff-port-717075 kubelet[3455]: I0127 03:23:55.041696    3455 scope.go:117] "RemoveContainer" containerID="3d3abe6a81fe92bfde37078055c3fcfc51e093347ef672cc48a438bc00b6e545"
	Jan 27 03:23:55 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:23:55.042242    3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sldrm_kubernetes-dashboard(55a77dc9-5452-49c6-a419-5484c7563685)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sldrm" podUID="55a77dc9-5452-49c6-a419-5484c7563685"
	Jan 27 03:23:58 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:23:58.062021    3455 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 03:23:58 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:23:58.062119    3455 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 03:23:58 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:23:58.062416    3455 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpxhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-fthnn_kube-system(fb8e4d08-fb1f-49a5-8984-44e975174502): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 03:23:58 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:23:58.063891    3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-fthnn" podUID="fb8e4d08-fb1f-49a5-8984-44e975174502"
	Jan 27 03:24:06 default-k8s-diff-port-717075 kubelet[3455]: I0127 03:24:06.044122    3455 scope.go:117] "RemoveContainer" containerID="3d3abe6a81fe92bfde37078055c3fcfc51e093347ef672cc48a438bc00b6e545"
	Jan 27 03:24:06 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:24:06.044391    3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sldrm_kubernetes-dashboard(55a77dc9-5452-49c6-a419-5484c7563685)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sldrm" podUID="55a77dc9-5452-49c6-a419-5484c7563685"
	Jan 27 03:24:13 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:24:13.043110    3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-fthnn" podUID="fb8e4d08-fb1f-49a5-8984-44e975174502"
	Jan 27 03:24:18 default-k8s-diff-port-717075 kubelet[3455]: I0127 03:24:18.041969    3455 scope.go:117] "RemoveContainer" containerID="3d3abe6a81fe92bfde37078055c3fcfc51e093347ef672cc48a438bc00b6e545"
	Jan 27 03:24:18 default-k8s-diff-port-717075 kubelet[3455]: E0127 03:24:18.042755    3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sldrm_kubernetes-dashboard(55a77dc9-5452-49c6-a419-5484c7563685)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sldrm" podUID="55a77dc9-5452-49c6-a419-5484c7563685"
	
	
	==> kubernetes-dashboard [0bfe4ee9e99ee5d74e9eeac16a04cf31c1286c6b4ce4bb8b32f488d0406a3e77] <==
	2025/01/27 03:12:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:12:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:13:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:13:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:14:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:14:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:15:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:15:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:16:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:16:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:17:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:17:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:18:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:18:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:19:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:19:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:20:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:20:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:21:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:21:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:22:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:22:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:23:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:23:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:24:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [53bfeb0c761957d498688af7ec47e14c0815c5d0309fa32d385429d6e6ba7445] <==
	I0127 03:02:59.638936       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 03:02:59.690777       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 03:02:59.696767       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 03:02:59.716877       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 03:02:59.717883       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-717075_4c90a009-b343-4f63-8713-82afd91dcb78!
	I0127 03:02:59.722402       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca6166fb-7879-4746-93b0-f03c2c35de3b", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-717075_4c90a009-b343-4f63-8713-82afd91dcb78 became leader
	I0127 03:02:59.819612       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-717075_4c90a009-b343-4f63-8713-82afd91dcb78!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-717075 -n default-k8s-diff-port-717075
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-717075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-fthnn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-717075 describe pod metrics-server-f79f97bbb-fthnn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-717075 describe pod metrics-server-f79f97bbb-fthnn: exit status 1 (64.666913ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-fthnn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-717075 describe pod metrics-server-f79f97bbb-fthnn: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1614.96s)

                                                
                                    

Test pass (275/316)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.16
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 5.13
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.07
18 TestDownloadOnly/v1.32.1/DeleteAll 0.14
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.63
22 TestOffline 65.95
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 210.39
29 TestAddons/serial/Volcano 39.62
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.5
35 TestAddons/parallel/Registry 16.91
36 TestAddons/parallel/Ingress 19.19
37 TestAddons/parallel/InspektorGadget 11.81
38 TestAddons/parallel/MetricsServer 6.85
40 TestAddons/parallel/CSI 47.24
41 TestAddons/parallel/Headlamp 25.91
42 TestAddons/parallel/CloudSpanner 5.78
43 TestAddons/parallel/LocalPath 61.52
44 TestAddons/parallel/NvidiaDevicePlugin 6.51
45 TestAddons/parallel/Yakd 10.86
47 TestAddons/StoppedEnableDisable 91.27
48 TestCertOptions 102.46
49 TestCertExpiration 354.2
51 TestForceSystemdFlag 81.21
52 TestForceSystemdEnv 72.23
54 TestKVMDriverInstallOrUpdate 1.55
58 TestErrorSpam/setup 45.44
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.77
61 TestErrorSpam/pause 1.58
62 TestErrorSpam/unpause 1.72
63 TestErrorSpam/stop 5.18
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 57.46
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 43.43
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.05
75 TestFunctional/serial/CacheCmd/cache/add_local 0.97
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 44.5
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.46
87 TestFunctional/serial/InvalidService 4.46
89 TestFunctional/parallel/ConfigCmd 0.42
90 TestFunctional/parallel/DashboardCmd 14.44
91 TestFunctional/parallel/DryRun 0.39
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 0.94
97 TestFunctional/parallel/ServiceCmdConnect 8.6
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 37.98
101 TestFunctional/parallel/SSHCmd 0.5
102 TestFunctional/parallel/CpCmd 1.46
103 TestFunctional/parallel/MySQL 28.63
104 TestFunctional/parallel/FileSync 0.23
105 TestFunctional/parallel/CertSync 1.34
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
113 TestFunctional/parallel/License 0.16
114 TestFunctional/parallel/ServiceCmd/DeployApp 9.24
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
125 TestFunctional/parallel/MountCmd/any-port 7.53
126 TestFunctional/parallel/ProfileCmd/profile_list 0.34
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
128 TestFunctional/parallel/MountCmd/specific-port 1.8
129 TestFunctional/parallel/ServiceCmd/List 0.31
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
132 TestFunctional/parallel/ServiceCmd/Format 0.31
133 TestFunctional/parallel/ServiceCmd/URL 0.34
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.43
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.98
140 TestFunctional/parallel/ImageCommands/Setup 0.43
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.71
142 TestFunctional/parallel/Version/short 0.06
143 TestFunctional/parallel/Version/components 0.56
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.27
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.63
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
147 TestFunctional/parallel/ImageCommands/ImageRemove 1.08
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 196.6
160 TestMultiControlPlane/serial/DeployApp 5.51
161 TestMultiControlPlane/serial/PingHostFromPods 1.24
162 TestMultiControlPlane/serial/AddWorkerNode 57.56
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
165 TestMultiControlPlane/serial/CopyFile 13.42
166 TestMultiControlPlane/serial/StopSecondaryNode 91.66
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
168 TestMultiControlPlane/serial/RestartSecondaryNode 43.8
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 492.47
171 TestMultiControlPlane/serial/DeleteSecondaryNode 6.12
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
173 TestMultiControlPlane/serial/StopCluster 272.55
174 TestMultiControlPlane/serial/RestartCluster 165.05
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
176 TestMultiControlPlane/serial/AddSecondaryNode 75.19
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
181 TestJSONOutput/start/Command 86.67
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.72
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.64
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 6.62
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 92.83
213 TestMountStart/serial/StartWithMountFirst 29.89
214 TestMountStart/serial/VerifyMountFirst 0.39
215 TestMountStart/serial/StartWithMountSecond 28.33
216 TestMountStart/serial/VerifyMountSecond 0.39
217 TestMountStart/serial/DeleteFirst 0.71
218 TestMountStart/serial/VerifyMountPostDelete 0.39
219 TestMountStart/serial/Stop 1.34
220 TestMountStart/serial/RestartStopped 22.73
221 TestMountStart/serial/VerifyMountPostStop 0.39
224 TestMultiNode/serial/FreshStart2Nodes 111.31
225 TestMultiNode/serial/DeployApp2Nodes 4.5
226 TestMultiNode/serial/PingHostFrom2Pods 0.85
227 TestMultiNode/serial/AddNode 50.65
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.58
230 TestMultiNode/serial/CopyFile 7.38
231 TestMultiNode/serial/StopNode 2.27
232 TestMultiNode/serial/StartAfterStop 35.68
233 TestMultiNode/serial/RestartKeepsNodes 317.88
234 TestMultiNode/serial/DeleteNode 2.03
235 TestMultiNode/serial/StopMultiNode 182.09
236 TestMultiNode/serial/RestartMultiNode 96.12
237 TestMultiNode/serial/ValidateNameConflict 47.46
242 TestPreload 260.24
244 TestScheduledStopUnix 120.08
248 TestRunningBinaryUpgrade 151.74
250 TestKubernetesUpgrade 162.6
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
257 TestNoKubernetes/serial/StartWithK8s 96.99
262 TestNetworkPlugins/group/false 3.21
266 TestNoKubernetes/serial/StartWithStopK8s 78.64
267 TestNoKubernetes/serial/Start 57.77
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
269 TestNoKubernetes/serial/ProfileList 1.95
270 TestNoKubernetes/serial/Stop 2.32
271 TestNoKubernetes/serial/StartNoArgs 23.87
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
273 TestStoppedBinaryUpgrade/Setup 0.43
274 TestStoppedBinaryUpgrade/Upgrade 167.06
283 TestPause/serial/Start 65.27
284 TestNetworkPlugins/group/auto/Start 86.79
285 TestNetworkPlugins/group/calico/Start 100.22
286 TestPause/serial/SecondStartNoReconfiguration 85.88
287 TestNetworkPlugins/group/auto/KubeletFlags 0.26
288 TestNetworkPlugins/group/auto/NetCatPod 9.27
289 TestStoppedBinaryUpgrade/MinikubeLogs 1.35
290 TestNetworkPlugins/group/custom-flannel/Start 69.4
291 TestNetworkPlugins/group/auto/DNS 0.26
292 TestNetworkPlugins/group/auto/Localhost 0.13
293 TestNetworkPlugins/group/auto/HairPin 0.14
294 TestNetworkPlugins/group/kindnet/Start 72.09
295 TestPause/serial/Pause 0.72
296 TestPause/serial/VerifyStatus 0.27
297 TestPause/serial/Unpause 0.71
298 TestPause/serial/PauseAgain 0.79
299 TestPause/serial/DeletePaused 1.01
300 TestPause/serial/VerifyDeletedResources 0.46
301 TestNetworkPlugins/group/flannel/Start 105.84
302 TestNetworkPlugins/group/calico/ControllerPod 6.01
303 TestNetworkPlugins/group/calico/KubeletFlags 0.24
304 TestNetworkPlugins/group/calico/NetCatPod 11.26
305 TestNetworkPlugins/group/calico/DNS 0.17
306 TestNetworkPlugins/group/calico/Localhost 0.14
307 TestNetworkPlugins/group/calico/HairPin 0.15
308 TestNetworkPlugins/group/enable-default-cni/Start 99.56
309 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
310 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.24
311 TestNetworkPlugins/group/custom-flannel/DNS 0.18
312 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
313 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
314 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
315 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
316 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
317 TestNetworkPlugins/group/bridge/Start 72.51
318 TestNetworkPlugins/group/kindnet/DNS 0.17
319 TestNetworkPlugins/group/kindnet/Localhost 0.13
320 TestNetworkPlugins/group/kindnet/HairPin 0.14
322 TestStartStop/group/old-k8s-version/serial/FirstStart 167.28
323 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
325 TestNetworkPlugins/group/flannel/NetCatPod 10.24
326 TestNetworkPlugins/group/flannel/DNS 0.2
327 TestNetworkPlugins/group/flannel/Localhost 0.15
328 TestNetworkPlugins/group/flannel/HairPin 0.14
330 TestStartStop/group/no-preload/serial/FirstStart 112.49
331 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
332 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.29
333 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
334 TestNetworkPlugins/group/bridge/NetCatPod 12.28
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
338 TestNetworkPlugins/group/bridge/DNS 0.15
339 TestNetworkPlugins/group/bridge/Localhost 0.12
340 TestNetworkPlugins/group/bridge/HairPin 0.12
342 TestStartStop/group/embed-certs/serial/FirstStart 86.83
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.73
345 TestStartStop/group/no-preload/serial/DeployApp 9.33
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
347 TestStartStop/group/no-preload/serial/Stop 90.85
348 TestStartStop/group/embed-certs/serial/DeployApp 9.3
349 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
350 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
351 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
353 TestStartStop/group/embed-certs/serial/Stop 91.77
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.1
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.93
356 TestStartStop/group/old-k8s-version/serial/Stop 91.07
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
360 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
364 TestStartStop/group/old-k8s-version/serial/SecondStart 194.86
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
368 TestStartStop/group/old-k8s-version/serial/Pause 2.65
370 TestStartStop/group/newest-cni/serial/FirstStart 51.45
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
373 TestStartStop/group/newest-cni/serial/Stop 2.33
374 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
375 TestStartStop/group/newest-cni/serial/SecondStart 35.91
376 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
377 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
379 TestStartStop/group/newest-cni/serial/Pause 2.89
x
+
TestDownloadOnly/v1.20.0/json-events (7.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-614748 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-614748 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (7.157027581s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 01:43:09.103110 1064439 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 01:43:09.103247 1064439 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-614748
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-614748: exit status 85 (65.229841ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-614748 | jenkins | v1.35.0 | 27 Jan 25 01:43 UTC |          |
	|         | -p download-only-614748        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 01:43:01
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 01:43:01.989833 1064451 out.go:345] Setting OutFile to fd 1 ...
	I0127 01:43:01.989938 1064451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:43:01.989947 1064451 out.go:358] Setting ErrFile to fd 2...
	I0127 01:43:01.989951 1064451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:43:01.990174 1064451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	W0127 01:43:01.990317 1064451 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20316-1057178/.minikube/config/config.json: open /home/jenkins/minikube-integration/20316-1057178/.minikube/config/config.json: no such file or directory
	I0127 01:43:01.990928 1064451 out.go:352] Setting JSON to true
	I0127 01:43:01.992084 1064451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8729,"bootTime":1737933453,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 01:43:01.992204 1064451 start.go:139] virtualization: kvm guest
	I0127 01:43:01.994668 1064451 out.go:97] [download-only-614748] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 01:43:01.994854 1064451 notify.go:220] Checking for updates...
	W0127 01:43:01.994877 1064451 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 01:43:01.996207 1064451 out.go:169] MINIKUBE_LOCATION=20316
	I0127 01:43:01.997589 1064451 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 01:43:01.998855 1064451 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 01:43:01.999984 1064451 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	I0127 01:43:02.001274 1064451 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 01:43:02.003774 1064451 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 01:43:02.004076 1064451 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 01:43:02.038049 1064451 out.go:97] Using the kvm2 driver based on user configuration
	I0127 01:43:02.038085 1064451 start.go:297] selected driver: kvm2
	I0127 01:43:02.038091 1064451 start.go:901] validating driver "kvm2" against <nil>
	I0127 01:43:02.038445 1064451 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 01:43:02.038532 1064451 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-1057178/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 01:43:02.054910 1064451 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 01:43:02.054981 1064451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 01:43:02.055879 1064451 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 01:43:02.056170 1064451 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 01:43:02.056214 1064451 cni.go:84] Creating CNI manager for ""
	I0127 01:43:02.056290 1064451 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 01:43:02.056309 1064451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 01:43:02.056380 1064451 start.go:340] cluster config:
	{Name:download-only-614748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-614748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 01:43:02.056631 1064451 iso.go:125] acquiring lock: {Name:mkd30bc9d11f9170e89ad95ce7ba25fa6d1e04f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 01:43:02.058493 1064451 out.go:97] Downloading VM boot image ...
	I0127 01:43:02.058532 1064451 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 01:43:04.700197 1064451 out.go:97] Starting "download-only-614748" primary control-plane node in "download-only-614748" cluster
	I0127 01:43:04.700251 1064451 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 01:43:04.726627 1064451 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0127 01:43:04.726665 1064451 cache.go:56] Caching tarball of preloaded images
	I0127 01:43:04.726828 1064451 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 01:43:04.728414 1064451 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 01:43:04.728428 1064451 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0127 01:43:04.760671 1064451 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-614748 host does not exist
	  To start a cluster, run: "minikube start -p download-only-614748"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-614748
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/json-events (5.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-304745 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-304745 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (5.13129852s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (5.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 01:43:14.578042 1064439 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 01:43:14.578121 1064439 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-304745
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-304745: exit status 85 (64.700943ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-614748 | jenkins | v1.35.0 | 27 Jan 25 01:43 UTC |                     |
	|         | -p download-only-614748        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 01:43 UTC | 27 Jan 25 01:43 UTC |
	| delete  | -p download-only-614748        | download-only-614748 | jenkins | v1.35.0 | 27 Jan 25 01:43 UTC | 27 Jan 25 01:43 UTC |
	| start   | -o=json --download-only        | download-only-304745 | jenkins | v1.35.0 | 27 Jan 25 01:43 UTC |                     |
	|         | -p download-only-304745        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 01:43:09
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 01:43:09.489177 1064638 out.go:345] Setting OutFile to fd 1 ...
	I0127 01:43:09.489647 1064638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:43:09.489666 1064638 out.go:358] Setting ErrFile to fd 2...
	I0127 01:43:09.489675 1064638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:43:09.490133 1064638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 01:43:09.491126 1064638 out.go:352] Setting JSON to true
	I0127 01:43:09.492206 1064638 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8736,"bootTime":1737933453,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 01:43:09.492323 1064638 start.go:139] virtualization: kvm guest
	I0127 01:43:09.494214 1064638 out.go:97] [download-only-304745] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 01:43:09.494342 1064638 notify.go:220] Checking for updates...
	I0127 01:43:09.495670 1064638 out.go:169] MINIKUBE_LOCATION=20316
	I0127 01:43:09.496896 1064638 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 01:43:09.498313 1064638 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 01:43:09.499861 1064638 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	I0127 01:43:09.501254 1064638 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 01:43:09.503742 1064638 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 01:43:09.503993 1064638 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 01:43:09.535951 1064638 out.go:97] Using the kvm2 driver based on user configuration
	I0127 01:43:09.535979 1064638 start.go:297] selected driver: kvm2
	I0127 01:43:09.535986 1064638 start.go:901] validating driver "kvm2" against <nil>
	I0127 01:43:09.536360 1064638 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 01:43:09.536455 1064638 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-1057178/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 01:43:09.551602 1064638 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 01:43:09.551648 1064638 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 01:43:09.552119 1064638 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 01:43:09.552257 1064638 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 01:43:09.552285 1064638 cni.go:84] Creating CNI manager for ""
	I0127 01:43:09.552332 1064638 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 01:43:09.552340 1064638 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 01:43:09.552386 1064638 start.go:340] cluster config:
	{Name:download-only-304745 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-304745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 01:43:09.552500 1064638 iso.go:125] acquiring lock: {Name:mkd30bc9d11f9170e89ad95ce7ba25fa6d1e04f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 01:43:09.554200 1064638 out.go:97] Starting "download-only-304745" primary control-plane node in "download-only-304745" cluster
	I0127 01:43:09.554244 1064638 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 01:43:09.582512 1064638 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 01:43:09.582552 1064638 cache.go:56] Caching tarball of preloaded images
	I0127 01:43:09.582757 1064638 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 01:43:09.584715 1064638 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0127 01:43:09.584748 1064638 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 ...
	I0127 01:43:09.611170 1064638 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:8f020f9a34bd60feec225b8429b992a8 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 01:43:12.152101 1064638 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 ...
	I0127 01:43:12.152197 1064638 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 ...
	I0127 01:43:12.917593 1064638 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 01:43:12.917954 1064638 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/download-only-304745/config.json ...
	I0127 01:43:12.917986 1064638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/download-only-304745/config.json: {Name:mk8b88285dfe87bf299b7d39f23e3b58b68425a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:43:12.918204 1064638 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 01:43:12.918406 1064638 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/linux/amd64/v1.32.1/kubectl
	
	
	* The control-plane node download-only-304745 host does not exist
	  To start a cluster, run: "minikube start -p download-only-304745"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-304745
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I0127 01:43:15.191457 1064439 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-947709 --alsologtostderr --binary-mirror http://127.0.0.1:38219 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-947709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-947709
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
TestOffline (65.95s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-004386 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-004386 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m5.068101441s)
helpers_test.go:175: Cleaning up "offline-containerd-004386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-004386
--- PASS: TestOffline (65.95s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-994590
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-994590: exit status 85 (62.663787ms)

                                                
                                                
-- stdout --
	* Profile "addons-994590" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-994590"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-994590
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-994590: exit status 85 (62.00272ms)

                                                
                                                
-- stdout --
	* Profile "addons-994590" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-994590"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (210.39s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-994590 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-994590 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m30.393315811s)
--- PASS: TestAddons/Setup (210.39s)

                                                
                                    
TestAddons/serial/Volcano (39.62s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 23.926905ms
addons_test.go:807: volcano-scheduler stabilized in 24.022068ms
addons_test.go:815: volcano-admission stabilized in 24.274515ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-n5pd6" [be2121a5-2aa1-4baf-bef5-6d8dc75bf030] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00477507s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-4p59d" [322c3f90-c5e9-4059-8197-c598bf2bd619] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.006811384s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-rmbpp" [5be7615e-ffc2-4aae-8072-709a1c6685d3] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005301852s
addons_test.go:842: (dbg) Run:  kubectl --context addons-994590 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-994590 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-994590 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [2656101a-ebf8-4e4e-a929-ac319884fd6e] Pending
helpers_test.go:344: "test-job-nginx-0" [2656101a-ebf8-4e4e-a929-ac319884fd6e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [2656101a-ebf8-4e4e-a929-ac319884fd6e] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004356246s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-994590 addons disable volcano --alsologtostderr -v=1: (11.202812593s)
--- PASS: TestAddons/serial/Volcano (39.62s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-994590 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-994590 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-994590 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-994590 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [526d3f00-0bc8-4238-84bc-6c1da4aa64cf] Pending
helpers_test.go:344: "busybox" [526d3f00-0bc8-4238-84bc-6c1da4aa64cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [526d3f00-0bc8-4238-84bc-6c1da4aa64cf] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.00397657s
addons_test.go:633: (dbg) Run:  kubectl --context addons-994590 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-994590 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-994590 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.50s)

                                                
                                    
TestAddons/parallel/Registry (16.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.343016ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-8wvws" [e74fb630-169f-4c8a-9104-2c775d47be7f] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00340745s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5pr4g" [c0d2bc37-8672-4da5-bee8-23699601611a] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004501462s
addons_test.go:331: (dbg) Run:  kubectl --context addons-994590 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-994590 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-994590 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.080734922s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 ip
2025/01/27 01:47:59 [DEBUG] GET http://192.168.39.130:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.91s)

                                                
                                    
TestAddons/parallel/Ingress (19.19s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-994590 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-994590 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-994590 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [64630636-15f2-4e01-bb23-748c079a5617] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [64630636-15f2-4e01-bb23-748c079a5617] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003319269s
I0127 01:48:18.827169 1064439 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-994590 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.130
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-994590 addons disable ingress-dns --alsologtostderr -v=1: (1.123354666s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-994590 addons disable ingress --alsologtostderr -v=1: (7.878073108s)
--- PASS: TestAddons/parallel/Ingress (19.19s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4sxnc" [5ebbe512-3427-442f-9582-40f9f9fae5a3] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.009865786s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-994590 addons disable inspektor-gadget --alsologtostderr -v=1: (5.801994023s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.85s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 21.646398ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-ws4mt" [4ec9db0e-35c5-45a8-9c8d-4132f75a0dd3] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.012785889s
addons_test.go:402: (dbg) Run:  kubectl --context addons-994590 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.85s)

                                                
                                    
TestAddons/parallel/CSI (47.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0127 01:48:06.586565 1064439 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 01:48:06.591975 1064439 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 01:48:06.592001 1064439 kapi.go:107] duration metric: took 5.466143ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.474522ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-994590 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-994590 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0f0e78e9-0bc2-42cf-af82-fa97d0ec9c34] Pending
helpers_test.go:344: "task-pv-pod" [0f0e78e9-0bc2-42cf-af82-fa97d0ec9c34] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0f0e78e9-0bc2-42cf-af82-fa97d0ec9c34] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00643356s
addons_test.go:511: (dbg) Run:  kubectl --context addons-994590 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-994590 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-994590 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-994590 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-994590 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-994590 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-994590 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [49bef2dd-c0d8-4c6b-aeaa-7379d0b203ea] Pending
helpers_test.go:344: "task-pv-pod-restore" [49bef2dd-c0d8-4c6b-aeaa-7379d0b203ea] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004540441s
addons_test.go:553: (dbg) Run:  kubectl --context addons-994590 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-994590 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-994590 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-994590 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.791313922s)
--- PASS: TestAddons/parallel/CSI (47.24s)
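The repeated "kubectl get pvc ... -o jsonpath={.status.phase}" calls above are the test helper polling the claim until it reports the phase it is waiting for. A minimal stand-alone sketch of that kind of wait loop, reusing the profile and claim names from the log above, is shown below; it is illustrative only and is not the helper's actual code, and it assumes kubectl is on PATH.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase shells out to kubectl, as the helper in the log above does,
	// and polls until the claim reports the wanted phase or the timeout expires.
	func waitForPVCPhase(kubeContext, namespace, name, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", namespace, name, want, timeout)
	}

	func main() {
		// Values mirror the log above: context addons-994590, claim hpvc in default.
		if err := waitForPVCPhase("addons-994590", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}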

                                                
                                    
TestAddons/parallel/Headlamp (25.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-994590 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-994590 --alsologtostderr -v=1: (1.151862782s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-ghxjl" [070156a5-334d-4a23-aaa9-d2d3b898b2f7] Pending
helpers_test.go:344: "headlamp-69d78d796f-ghxjl" [070156a5-334d-4a23-aaa9-d2d3b898b2f7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-ghxjl" [070156a5-334d-4a23-aaa9-d2d3b898b2f7] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.004446251s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-994590 addons disable headlamp --alsologtostderr -v=1: (5.752311416s)
--- PASS: TestAddons/parallel/Headlamp (25.91s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-98dc8" [09e1ac99-d10d-4769-a74c-8ff89bc31340] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01381739s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.78s)

                                                
                                    
TestAddons/parallel/LocalPath (61.52s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-994590 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-994590 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c0a5467e-c154-4f08-8d4e-e305ce3531fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c0a5467e-c154-4f08-8d4e-e305ce3531fc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c0a5467e-c154-4f08-8d4e-e305ce3531fc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.006215688s
addons_test.go:906: (dbg) Run:  kubectl --context addons-994590 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 ssh "cat /opt/local-path-provisioner/pvc-ef3ab97f-79e5-41f1-a30b-cd98ff66b721_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-994590 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-994590 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-994590 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.631532839s)
--- PASS: TestAddons/parallel/LocalPath (61.52s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mw79v" [d2b8bccc-fbbf-4e38-81d0-6bef6b8acf9f] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004719558s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                    
TestAddons/parallel/Yakd (10.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-m8v55" [bced7eb6-eb54-425b-b72e-cb60dac7063f] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004282302s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-994590 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-994590 addons disable yakd --alsologtostderr -v=1: (5.858020162s)
--- PASS: TestAddons/parallel/Yakd (10.86s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-994590
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-994590: (1m30.972064188s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-994590
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-994590
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-994590
--- PASS: TestAddons/StoppedEnableDisable (91.27s)

                                                
                                    
TestCertOptions (102.46s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-455791 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-455791 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m41.142328636s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-455791 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-455791 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-455791 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-455791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-455791
--- PASS: TestCertOptions (102.46s)

                                                
                                    
TestCertExpiration (354.2s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-591446 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0127 02:44:00.955790 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-591446 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m51.637304783s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-591446 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
E0127 02:49:00.955689 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-591446 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (1m1.509777217s)
helpers_test.go:175: Cleaning up "cert-expiration-591446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-591446
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-591446: (1.052935973s)
--- PASS: TestCertExpiration (354.20s)

                                                
                                    
TestForceSystemdFlag (81.21s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-670487 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-670487 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m19.96993906s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-670487 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-670487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-670487
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-670487: (1.029691639s)
--- PASS: TestForceSystemdFlag (81.21s)

                                                
                                    
TestForceSystemdEnv (72.23s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-064299 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-064299 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m11.031377804s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-064299 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-064299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-064299
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-064299: (1.000148338s)
--- PASS: TestForceSystemdEnv (72.23s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.55s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0127 02:46:15.724708 1064439 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:46:15.724868 1064439 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 02:46:15.757785 1064439 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 02:46:15.758323 1064439 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 02:46:15.758386 1064439 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate643495000/001/docker-machine-driver-kvm2
I0127 02:46:15.876961 1064439 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate643495000/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc000124530 gz:0xc000124538 tar:0xc000124260 tar.bz2:0xc000124270 tar.gz:0xc000124290 tar.xz:0xc0001244d0 tar.zst:0xc0001244e0 tbz2:0xc000124270 tgz:0xc000124290 txz:0xc0001244d0 tzst:0xc0001244e0 xz:0xc000124780 zip:0xc000124790 zst:0xc000124788] Getters:map[file:0xc000812f30 http:0xc000c901e0 https:0xc000c90230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 02:46:15.877013 1064439 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate643495000/001/docker-machine-driver-kvm2
I0127 02:46:16.586864 1064439 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:46:16.586958 1064439 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 02:46:16.618210 1064439 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 02:46:16.618244 1064439 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 02:46:16.618327 1064439 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 02:46:16.618368 1064439 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate643495000/002/docker-machine-driver-kvm2
I0127 02:46:16.640881 1064439 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate643495000/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc000124530 gz:0xc000124538 tar:0xc000124260 tar.bz2:0xc000124270 tar.gz:0xc000124290 tar.xz:0xc0001244d0 tar.zst:0xc0001244e0 tbz2:0xc000124270 tgz:0xc000124290 txz:0xc0001244d0 tzst:0xc0001244e0 xz:0xc000124780 zip:0xc000124790 zst:0xc000124788] Getters:map[file:0xc0019dba00 http:0xc0019d59f0 https:0xc0019d5a40] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 02:46:16.640972 1064439 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate643495000/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.55s)
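The warnings above show the driver updater first requesting the architecture-suffixed release asset, hitting a 404 on its checksum file, and then retrying the common asset name. A rough, self-contained sketch of that fallback pattern follows; the function, paths, and omission of checksum verification are all simplifications for illustration, and this is not minikube's actual download code.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetchDriver tries the arch-suffixed release asset first and, when that
	// request does not succeed (for example a 404 like the one logged above),
	// retries the common asset name. Checksum handling is intentionally omitted.
	func fetchDriver(version, arch, dst string) error {
		base := "https://github.com/kubernetes/minikube/releases/download/" + version
		candidates := []string{
			base + "/docker-machine-driver-kvm2-" + arch,
			base + "/docker-machine-driver-kvm2",
		}
		var lastErr error
		for _, url := range candidates {
			resp, err := http.Get(url)
			if err != nil {
				lastErr = err
				continue
			}
			if resp.StatusCode != http.StatusOK { // e.g. bad response code: 404
				resp.Body.Close()
				lastErr = fmt.Errorf("bad response code: %d for %s", resp.StatusCode, url)
				continue
			}
			out, err := os.Create(dst)
			if err != nil {
				resp.Body.Close()
				return err
			}
			_, copyErr := io.Copy(out, resp.Body)
			resp.Body.Close()
			out.Close()
			return copyErr
		}
		return lastErr
	}

	func main() {
		if err := fetchDriver("v1.3.0", "amd64", "/tmp/docker-machine-driver-kvm2"); err != nil {
			fmt.Println("download failed:", err)
		}
	}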

                                                
                                    
TestErrorSpam/setup (45.44s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-425859 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-425859 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-425859 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-425859 --driver=kvm2  --container-runtime=containerd: (45.438092749s)
--- PASS: TestErrorSpam/setup (45.44s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
TestErrorSpam/pause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
TestErrorSpam/unpause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
TestErrorSpam/stop (5.18s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 stop: (1.546675764s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 stop: (2.046467027s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-425859 --log_dir /tmp/nospam-425859 stop: (1.585815907s)
--- PASS: TestErrorSpam/stop (5.18s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/test/nested/copy/1064439/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.46s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-249952 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0127 01:51:46.285230 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:51:46.291708 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:51:46.303103 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:51:46.324505 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:51:46.365926 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:51:46.447367 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:51:46.608969 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:51:46.930680 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:51:47.572347 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:51:48.854006 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:51:51.415834 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:51:56.537205 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:52:06.779547 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-249952 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (57.460606249s)
--- PASS: TestFunctional/serial/StartWithProxy (57.46s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (43.43s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0127 01:52:18.981950 1064439 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-249952 --alsologtostderr -v=8
E0127 01:52:27.261178 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-249952 --alsologtostderr -v=8: (43.427400597s)
functional_test.go:663: soft start took 43.428254719s for "functional-249952" cluster.
I0127 01:53:02.409784 1064439 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (43.43s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-249952 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-249952 cache add registry.k8s.io/pause:3.1: (1.034546859s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-249952 cache add registry.k8s.io/pause:3.3: (1.069108589s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-249952 /tmp/TestFunctionalserialCacheCmdcacheadd_local1391721792/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 cache add minikube-local-cache-test:functional-249952
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 cache delete minikube-local-cache-test:functional-249952
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-249952
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.97s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-249952 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (219.262171ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0127 01:53:08.222564 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
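For manual reproduction, the image-cache workflow these CacheCmd tests drive is roughly the following sketch; the profile name functional-249952 and the out/minikube-linux-amd64 binary are specific to this run, so substitute your own profile and a minikube on PATH:
	# pull an image into minikube's local cache and load it onto the node
	minikube -p functional-249952 cache add registry.k8s.io/pause:latest
	# remove it from the node's containerd image store
	minikube -p functional-249952 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# crictl inspecti exits non-zero while the image is absent
	minikube -p functional-249952 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# push everything in the cache back onto the node, then re-check
	minikube -p functional-249952 cache reload
	minikube -p functional-249952 ssh sudo crictl inspecti registry.k8s.io/pause:latest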

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 kubectl -- --context functional-249952 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-249952 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.5s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-249952 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-249952 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.500026216s)
functional_test.go:761: restart took 44.500167114s for "functional-249952" cluster.
I0127 01:53:53.259836 1064439 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (44.50s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-249952 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.46s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-249952 logs: (1.455300497s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.46s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 logs --file /tmp/TestFunctionalserialLogsFileCmd185077228/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-249952 logs --file /tmp/TestFunctionalserialLogsFileCmd185077228/001/logs.txt: (1.462705967s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                    
TestFunctional/serial/InvalidService (4.46s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-249952 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-249952
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-249952: exit status 115 (306.919088ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.209:30366 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-249952 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.46s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-249952 config get cpus: exit status 14 (79.133295ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-249952 config get cpus: exit status 14 (65.171503ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
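The config behaviour checked above can be reproduced by hand with roughly this sketch (profile name taken from this run; exit status 14 for a missing key is what the log shows):
	minikube -p functional-249952 config set cpus 2    # persist a value for this profile
	minikube -p functional-249952 config get cpus      # prints the stored value
	minikube -p functional-249952 config unset cpus    # remove it again
	minikube -p functional-249952 config get cpus      # now exits 14: key not found in config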

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.44s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-249952 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-249952 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1072520: os: process already finished
E0127 01:54:30.143963 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/DashboardCmd (14.44s)

                                                
                                    
TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-249952 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-249952 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (160.774321ms)

                                                
                                                
-- stdout --
	* [functional-249952] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 01:54:11.610802 1072002 out.go:345] Setting OutFile to fd 1 ...
	I0127 01:54:11.610981 1072002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:54:11.610994 1072002 out.go:358] Setting ErrFile to fd 2...
	I0127 01:54:11.611001 1072002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:54:11.611283 1072002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 01:54:11.612040 1072002 out.go:352] Setting JSON to false
	I0127 01:54:11.613515 1072002 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9399,"bootTime":1737933453,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 01:54:11.613666 1072002 start.go:139] virtualization: kvm guest
	I0127 01:54:11.615821 1072002 out.go:177] * [functional-249952] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 01:54:11.617516 1072002 notify.go:220] Checking for updates...
	I0127 01:54:11.617542 1072002 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 01:54:11.618898 1072002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 01:54:11.620072 1072002 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 01:54:11.621311 1072002 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	I0127 01:54:11.622533 1072002 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 01:54:11.623814 1072002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 01:54:11.625717 1072002 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 01:54:11.626306 1072002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 01:54:11.626400 1072002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:54:11.644393 1072002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I0127 01:54:11.644795 1072002 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:54:11.645356 1072002 main.go:141] libmachine: Using API Version  1
	I0127 01:54:11.645381 1072002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:54:11.645737 1072002 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:54:11.645943 1072002 main.go:141] libmachine: (functional-249952) Calling .DriverName
	I0127 01:54:11.646198 1072002 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 01:54:11.646508 1072002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 01:54:11.646555 1072002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:54:11.662608 1072002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0127 01:54:11.663125 1072002 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:54:11.663645 1072002 main.go:141] libmachine: Using API Version  1
	I0127 01:54:11.663675 1072002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:54:11.664004 1072002 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:54:11.664273 1072002 main.go:141] libmachine: (functional-249952) Calling .DriverName
	I0127 01:54:11.702069 1072002 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 01:54:11.703143 1072002 start.go:297] selected driver: kvm2
	I0127 01:54:11.703174 1072002 start.go:901] validating driver "kvm2" against &{Name:functional-249952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-249952 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 01:54:11.703321 1072002 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 01:54:11.705248 1072002 out.go:201] 
	W0127 01:54:11.706426 1072002 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 01:54:11.707508 1072002 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-249952 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.39s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-249952 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-249952 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (166.029161ms)

                                                
                                                
-- stdout --
	* [functional-249952] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 01:54:11.969497 1072176 out.go:345] Setting OutFile to fd 1 ...
	I0127 01:54:11.969760 1072176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:54:11.969771 1072176 out.go:358] Setting ErrFile to fd 2...
	I0127 01:54:11.969776 1072176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:54:11.970093 1072176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 01:54:11.970875 1072176 out.go:352] Setting JSON to false
	I0127 01:54:11.972441 1072176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9399,"bootTime":1737933453,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 01:54:11.972641 1072176 start.go:139] virtualization: kvm guest
	I0127 01:54:11.976921 1072176 out.go:177] * [functional-249952] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 01:54:11.978382 1072176 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 01:54:11.978449 1072176 notify.go:220] Checking for updates...
	I0127 01:54:11.980730 1072176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 01:54:11.982325 1072176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 01:54:11.983608 1072176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	I0127 01:54:11.985972 1072176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 01:54:11.987372 1072176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 01:54:11.989129 1072176 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 01:54:11.989589 1072176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 01:54:11.989647 1072176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:54:12.008827 1072176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0127 01:54:12.009307 1072176 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:54:12.009928 1072176 main.go:141] libmachine: Using API Version  1
	I0127 01:54:12.009955 1072176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:54:12.010448 1072176 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:54:12.010684 1072176 main.go:141] libmachine: (functional-249952) Calling .DriverName
	I0127 01:54:12.011006 1072176 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 01:54:12.011519 1072176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 01:54:12.011575 1072176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:54:12.029650 1072176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39051
	I0127 01:54:12.030129 1072176 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:54:12.030990 1072176 main.go:141] libmachine: Using API Version  1
	I0127 01:54:12.031018 1072176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:54:12.031500 1072176 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:54:12.031738 1072176 main.go:141] libmachine: (functional-249952) Calling .DriverName
	I0127 01:54:12.070144 1072176 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0127 01:54:12.071373 1072176 start.go:297] selected driver: kvm2
	I0127 01:54:12.071392 1072176 start.go:901] validating driver "kvm2" against &{Name:functional-249952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-249952 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 01:54:12.071528 1072176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 01:54:12.073523 1072176 out.go:201] 
	W0127 01:54:12.074806 1072176 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 01:54:12.076074 1072176 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.94s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.94s)
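The second status call above uses a Go template for per-component output; a minimal sketch of the same idea, using the field names the test relies on, is:
	# custom per-component format (fields: .Host, .Kubelet, .APIServer, .Kubeconfig)
	minikube -p functional-249952 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	# or the full status as JSON
	minikube -p functional-249952 status -o json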

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-249952 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-249952 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-t6ntl" [f3d71a0c-5da3-4061-9a51-ded581c287cc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-t6ntl" [f3d71a0c-5da3-4061-9a51-ded581c287cc] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.006761818s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.209:30510
functional_test.go:1675: http://192.168.39.209:30510: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-t6ntl

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.209:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.209:30510
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
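The connect test above amounts to the following manual sequence (profile and image are the ones from this run; curl stands in for the Go HTTP client the test actually uses):
	# deploy the echoserver and expose it on a NodePort
	kubectl --context functional-249952 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-249952 expose deployment hello-node-connect --type=NodePort --port=8080
	# ask minikube for the reachable URL, then hit it
	minikube -p functional-249952 service hello-node-connect --url
	curl "$(minikube -p functional-249952 service hello-node-connect --url)"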

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (37.98s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1cad1db7-07ed-49a4-8ada-e36c2f642d02] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00456983s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-249952 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-249952 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-249952 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-249952 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3f23ac48-ecdd-4e01-bfb3-2b3d1cd87181] Pending
helpers_test.go:344: "sp-pod" [3f23ac48-ecdd-4e01-bfb3-2b3d1cd87181] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3f23ac48-ecdd-4e01-bfb3-2b3d1cd87181] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005349713s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-249952 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-249952 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-249952 delete -f testdata/storage-provisioner/pod.yaml: (2.073339993s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-249952 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c7a365d2-381e-4284-831c-6ff6e6ec97c6] Pending
helpers_test.go:344: "sp-pod" [c7a365d2-381e-4284-831c-6ff6e6ec97c6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c7a365d2-381e-4284-831c-6ff6e6ec97c6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.016464264s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-249952 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.98s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.5s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.46s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh -n functional-249952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 cp functional-249952:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd23199132/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh -n functional-249952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh -n functional-249952 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)
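The cp round-trips above correspond to roughly this manual sketch (guest path is the one from the log; the host destination here is an arbitrary placeholder rather than the test's temp dir):
	# host -> guest, then read it back over ssh
	minikube -p functional-249952 cp testdata/cp-test.txt /home/docker/cp-test.txt
	minikube -p functional-249952 ssh -n functional-249952 "sudo cat /home/docker/cp-test.txt"
	# guest -> host
	minikube -p functional-249952 cp functional-249952:/home/docker/cp-test.txt /tmp/cp-test.txt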

                                                
                                    
TestFunctional/parallel/MySQL (28.63s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-249952 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-v6ft9" [d69e126e-73a3-4095-8867-576ce4dc0b9a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-v6ft9" [d69e126e-73a3-4095-8867-576ce4dc0b9a] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.053823817s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-249952 exec mysql-58ccfd96bb-v6ft9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-249952 exec mysql-58ccfd96bb-v6ft9 -- mysql -ppassword -e "show databases;": exit status 1 (174.69192ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 01:54:39.188126 1064439 retry.go:31] will retry after 1.185672662s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-249952 exec mysql-58ccfd96bb-v6ft9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-249952 exec mysql-58ccfd96bb-v6ft9 -- mysql -ppassword -e "show databases;": exit status 1 (122.060642ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 01:54:40.496641 1064439 retry.go:31] will retry after 1.519741904s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-249952 exec mysql-58ccfd96bb-v6ft9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-249952 exec mysql-58ccfd96bb-v6ft9 -- mysql -ppassword -e "show databases;": exit status 1 (117.903916ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 01:54:42.135338 1064439 retry.go:31] will retry after 2.112696635s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-249952 exec mysql-58ccfd96bb-v6ft9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.63s)

                                                
                                    
TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1064439/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "sudo cat /etc/test/nested/copy/1064439/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
TestFunctional/parallel/CertSync (1.34s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1064439.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "sudo cat /etc/ssl/certs/1064439.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1064439.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "sudo cat /usr/share/ca-certificates/1064439.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/10644392.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "sudo cat /etc/ssl/certs/10644392.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/10644392.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "sudo cat /usr/share/ca-certificates/10644392.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.34s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-249952 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-249952 ssh "sudo systemctl is-active docker": exit status 1 (282.026392ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-249952 ssh "sudo systemctl is-active crio": exit status 1 (266.315571ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                    
TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-249952 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-249952 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-d6p52" [97517caa-865c-48f6-9384-52c1f821e0c3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-d6p52" [97517caa-865c-48f6-9384-52c1f821e0c3] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004547375s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.24s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.53s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-249952 /tmp/TestFunctionalparallelMountCmdany-port671113875/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737942842358173377" to /tmp/TestFunctionalparallelMountCmdany-port671113875/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737942842358173377" to /tmp/TestFunctionalparallelMountCmdany-port671113875/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737942842358173377" to /tmp/TestFunctionalparallelMountCmdany-port671113875/001/test-1737942842358173377
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-249952 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.343949ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 01:54:02.603886 1064439 retry.go:31] will retry after 366.581493ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 01:54 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 01:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 01:54 test-1737942842358173377
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh cat /mount-9p/test-1737942842358173377
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-249952 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1b3ecf04-8ca8-4ddf-b820-c21942311e68] Pending
helpers_test.go:344: "busybox-mount" [1b3ecf04-8ca8-4ddf-b820-c21942311e68] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1b3ecf04-8ca8-4ddf-b820-c21942311e68] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1b3ecf04-8ca8-4ddf-b820-c21942311e68] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003729229s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-249952 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-249952 /tmp/TestFunctionalparallelMountCmdany-port671113875/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.53s)
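
A minimal sketch of the same 9p mount flow run manually, assuming an existing functional-249952 profile; /tmp/example-dir is a placeholder host directory (the test uses a generated temporary directory), and the mount is backgrounded with & because the test runs it as a daemon:

	out/minikube-linux-amd64 mount -p functional-249952 /tmp/example-dir:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-249952 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-249952 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-249952 ssh "sudo umount -f /mount-9p"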

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "284.925459ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "53.636232ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "330.341456ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "56.276079ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)
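
The ProfileCmd subtests above cover the profile listing variants; a sketch of the same invocations, with flags taken directly from the log:

	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 profile list -l
	out/minikube-linux-amd64 profile list -o json
	out/minikube-linux-amd64 profile list -o json --light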

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-249952 /tmp/TestFunctionalparallelMountCmdspecific-port3958771155/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-249952 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (236.702875ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 01:54:10.123827 1064439 retry.go:31] will retry after 478.618431ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-249952 /tmp/TestFunctionalparallelMountCmdspecific-port3958771155/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-249952 ssh "sudo umount -f /mount-9p": exit status 1 (220.01091ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-249952 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-249952 /tmp/TestFunctionalparallelMountCmdspecific-port3958771155/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 service list -o json
functional_test.go:1494: Took "282.170676ms" to run "out/minikube-linux-amd64 -p functional-249952 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.209:31032
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.209:31032
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
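
A sketch of the service-discovery commands exercised by the ServiceCmd subtests, assuming the hello-node service created earlier; the printed endpoint (here https://192.168.39.209:31032) depends on the VM IP and the assigned NodePort, so it will differ between runs:

	out/minikube-linux-amd64 -p functional-249952 service list
	out/minikube-linux-amd64 -p functional-249952 service list -o json
	out/minikube-linux-amd64 -p functional-249952 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-249952 service hello-node --url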

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-249952 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3344245730/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-249952 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3344245730/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-249952 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3344245730/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-249952 ssh "findmnt -T" /mount1: exit status 1 (316.848826ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 01:54:12.007113 1064439 retry.go:31] will retry after 746.174014ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-249952 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-249952 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3344245730/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-249952 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3344245730/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-249952 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3344245730/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)
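
VerifyCleanup tears down multiple concurrent mounts with a single command; a short sketch, with /tmp/example-dir again a placeholder host directory:

	out/minikube-linux-amd64 mount -p functional-249952 /tmp/example-dir:/mount1 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-249952 /tmp/example-dir:/mount2 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-249952 --kill=true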

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-249952 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-249952
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-249952
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-249952 image ls --format short --alsologtostderr:
I0127 01:54:20.699313 1073069 out.go:345] Setting OutFile to fd 1 ...
I0127 01:54:20.699602 1073069 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 01:54:20.699612 1073069 out.go:358] Setting ErrFile to fd 2...
I0127 01:54:20.699617 1073069 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 01:54:20.699819 1073069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
I0127 01:54:20.700435 1073069 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 01:54:20.700546 1073069 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 01:54:20.700900 1073069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 01:54:20.700980 1073069 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 01:54:20.716960 1073069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
I0127 01:54:20.717625 1073069 main.go:141] libmachine: () Calling .GetVersion
I0127 01:54:20.718292 1073069 main.go:141] libmachine: Using API Version  1
I0127 01:54:20.718316 1073069 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 01:54:20.718727 1073069 main.go:141] libmachine: () Calling .GetMachineName
I0127 01:54:20.718943 1073069 main.go:141] libmachine: (functional-249952) Calling .GetState
I0127 01:54:20.721065 1073069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 01:54:20.721121 1073069 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 01:54:20.737166 1073069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44443
I0127 01:54:20.737734 1073069 main.go:141] libmachine: () Calling .GetVersion
I0127 01:54:20.738336 1073069 main.go:141] libmachine: Using API Version  1
I0127 01:54:20.738371 1073069 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 01:54:20.738773 1073069 main.go:141] libmachine: () Calling .GetMachineName
I0127 01:54:20.739001 1073069 main.go:141] libmachine: (functional-249952) Calling .DriverName
I0127 01:54:20.739274 1073069 ssh_runner.go:195] Run: systemctl --version
I0127 01:54:20.739302 1073069 main.go:141] libmachine: (functional-249952) Calling .GetSSHHostname
I0127 01:54:20.742540 1073069 main.go:141] libmachine: (functional-249952) DBG | domain functional-249952 has defined MAC address 52:54:00:a8:5d:22 in network mk-functional-249952
I0127 01:54:20.743018 1073069 main.go:141] libmachine: (functional-249952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:5d:22", ip: ""} in network mk-functional-249952: {Iface:virbr1 ExpiryTime:2025-01-27 02:51:36 +0000 UTC Type:0 Mac:52:54:00:a8:5d:22 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-249952 Clientid:01:52:54:00:a8:5d:22}
I0127 01:54:20.743057 1073069 main.go:141] libmachine: (functional-249952) DBG | domain functional-249952 has defined IP address 192.168.39.209 and MAC address 52:54:00:a8:5d:22 in network mk-functional-249952
I0127 01:54:20.743180 1073069 main.go:141] libmachine: (functional-249952) Calling .GetSSHPort
I0127 01:54:20.743381 1073069 main.go:141] libmachine: (functional-249952) Calling .GetSSHKeyPath
I0127 01:54:20.743527 1073069 main.go:141] libmachine: (functional-249952) Calling .GetSSHUsername
I0127 01:54:20.743647 1073069 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/functional-249952/id_rsa Username:docker}
I0127 01:54:20.838430 1073069 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 01:54:20.940106 1073069 main.go:141] libmachine: Making call to close driver server
I0127 01:54:20.940130 1073069 main.go:141] libmachine: (functional-249952) Calling .Close
I0127 01:54:20.940425 1073069 main.go:141] libmachine: Successfully made call to close driver server
I0127 01:54:20.940448 1073069 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 01:54:20.940431 1073069 main.go:141] libmachine: (functional-249952) DBG | Closing plugin on server side
I0127 01:54:20.940458 1073069 main.go:141] libmachine: Making call to close driver server
I0127 01:54:20.940474 1073069 main.go:141] libmachine: (functional-249952) Calling .Close
I0127 01:54:20.940690 1073069 main.go:141] libmachine: (functional-249952) DBG | Closing plugin on server side
I0127 01:54:20.940756 1073069 main.go:141] libmachine: Successfully made call to close driver server
I0127 01:54:20.940788 1073069 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-249952 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:2b0d65 | 20.7MB |
| docker.io/kicbase/echo-server               | functional-249952  | sha256:9056ab | 2.37MB |
| docker.io/library/minikube-local-cache-test | functional-249952  | sha256:602cbd | 990B   |
| docker.io/library/nginx                     | latest             | sha256:9bea9f | 72.1MB |
| localhost/my-image                          | functional-249952  | sha256:73b71e | 775kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:95c0bd | 28.7MB |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e29f9c | 30.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:50415e | 38.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:019ee1 | 26.3MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-249952 image ls --format table --alsologtostderr:
I0127 01:54:25.647170 1073272 out.go:345] Setting OutFile to fd 1 ...
I0127 01:54:25.647433 1073272 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 01:54:25.647444 1073272 out.go:358] Setting ErrFile to fd 2...
I0127 01:54:25.647448 1073272 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 01:54:25.647616 1073272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
I0127 01:54:25.648394 1073272 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 01:54:25.648536 1073272 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 01:54:25.648989 1073272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 01:54:25.649040 1073272 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 01:54:25.664238 1073272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45403
I0127 01:54:25.664847 1073272 main.go:141] libmachine: () Calling .GetVersion
I0127 01:54:25.665606 1073272 main.go:141] libmachine: Using API Version  1
I0127 01:54:25.665633 1073272 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 01:54:25.666063 1073272 main.go:141] libmachine: () Calling .GetMachineName
I0127 01:54:25.666247 1073272 main.go:141] libmachine: (functional-249952) Calling .GetState
I0127 01:54:25.668232 1073272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 01:54:25.668289 1073272 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 01:54:25.683266 1073272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
I0127 01:54:25.683838 1073272 main.go:141] libmachine: () Calling .GetVersion
I0127 01:54:25.684464 1073272 main.go:141] libmachine: Using API Version  1
I0127 01:54:25.684489 1073272 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 01:54:25.684799 1073272 main.go:141] libmachine: () Calling .GetMachineName
I0127 01:54:25.685048 1073272 main.go:141] libmachine: (functional-249952) Calling .DriverName
I0127 01:54:25.685293 1073272 ssh_runner.go:195] Run: systemctl --version
I0127 01:54:25.685329 1073272 main.go:141] libmachine: (functional-249952) Calling .GetSSHHostname
I0127 01:54:25.688059 1073272 main.go:141] libmachine: (functional-249952) DBG | domain functional-249952 has defined MAC address 52:54:00:a8:5d:22 in network mk-functional-249952
I0127 01:54:25.688479 1073272 main.go:141] libmachine: (functional-249952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:5d:22", ip: ""} in network mk-functional-249952: {Iface:virbr1 ExpiryTime:2025-01-27 02:51:36 +0000 UTC Type:0 Mac:52:54:00:a8:5d:22 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-249952 Clientid:01:52:54:00:a8:5d:22}
I0127 01:54:25.688514 1073272 main.go:141] libmachine: (functional-249952) DBG | domain functional-249952 has defined IP address 192.168.39.209 and MAC address 52:54:00:a8:5d:22 in network mk-functional-249952
I0127 01:54:25.688640 1073272 main.go:141] libmachine: (functional-249952) Calling .GetSSHPort
I0127 01:54:25.688796 1073272 main.go:141] libmachine: (functional-249952) Calling .GetSSHKeyPath
I0127 01:54:25.688926 1073272 main.go:141] libmachine: (functional-249952) Calling .GetSSHUsername
I0127 01:54:25.689061 1073272 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/functional-249952/id_rsa Username:docker}
I0127 01:54:25.773298 1073272 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 01:54:25.831791 1073272 main.go:141] libmachine: Making call to close driver server
I0127 01:54:25.831817 1073272 main.go:141] libmachine: (functional-249952) Calling .Close
I0127 01:54:25.832181 1073272 main.go:141] libmachine: Successfully made call to close driver server
I0127 01:54:25.832212 1073272 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 01:54:25.832229 1073272 main.go:141] libmachine: Making call to close driver server
I0127 01:54:25.832228 1073272 main.go:141] libmachine: (functional-249952) DBG | Closing plugin on server side
I0127 01:54:25.832240 1073272 main.go:141] libmachine: (functional-249952) Calling .Close
I0127 01:54:25.832556 1073272 main.go:141] libmachine: (functional-249952) DBG | Closing plugin on server side
I0127 01:54:25.832582 1073272 main.go:141] libmachine: Successfully made call to close driver server
I0127 01:54:25.832598 1073272 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-249952 image ls --format json --alsologtostderr:
[{"id":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"20657536"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:73b71e6adcae93ad06b946bf0fd0a8c8b561d8a4582d798c358d71c4323e1655","repoDigests":[],"repoTags":["localhost/my-image:functional-249952"],"size":"774889"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256
:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"28671624"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoT
ags":[],"size":"19746404"},{"id":"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"72080558"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"30908485"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTag
s":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-249952"],"size":"2372971"},{"id":"sha256:602cbd866907b9588b623cc03a1675e6b64caee7ff5c613a19a9daea9242c779","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-249952"],"size":"990"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},{"id":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"26258470"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104
e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"38601118"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-249952 image ls --format json --alsologtostderr:
I0127 01:54:25.406620 1073248 out.go:345] Setting OutFile to fd 1 ...
I0127 01:54:25.406729 1073248 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 01:54:25.406737 1073248 out.go:358] Setting ErrFile to fd 2...
I0127 01:54:25.406741 1073248 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 01:54:25.406932 1073248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
I0127 01:54:25.407519 1073248 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 01:54:25.407621 1073248 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 01:54:25.407966 1073248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 01:54:25.408015 1073248 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 01:54:25.423392 1073248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46747
I0127 01:54:25.423958 1073248 main.go:141] libmachine: () Calling .GetVersion
I0127 01:54:25.424730 1073248 main.go:141] libmachine: Using API Version  1
I0127 01:54:25.424764 1073248 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 01:54:25.425143 1073248 main.go:141] libmachine: () Calling .GetMachineName
I0127 01:54:25.425338 1073248 main.go:141] libmachine: (functional-249952) Calling .GetState
I0127 01:54:25.427140 1073248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 01:54:25.427189 1073248 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 01:54:25.443454 1073248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
I0127 01:54:25.443893 1073248 main.go:141] libmachine: () Calling .GetVersion
I0127 01:54:25.444408 1073248 main.go:141] libmachine: Using API Version  1
I0127 01:54:25.444435 1073248 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 01:54:25.444732 1073248 main.go:141] libmachine: () Calling .GetMachineName
I0127 01:54:25.444934 1073248 main.go:141] libmachine: (functional-249952) Calling .DriverName
I0127 01:54:25.445199 1073248 ssh_runner.go:195] Run: systemctl --version
I0127 01:54:25.445229 1073248 main.go:141] libmachine: (functional-249952) Calling .GetSSHHostname
I0127 01:54:25.448700 1073248 main.go:141] libmachine: (functional-249952) DBG | domain functional-249952 has defined MAC address 52:54:00:a8:5d:22 in network mk-functional-249952
I0127 01:54:25.449303 1073248 main.go:141] libmachine: (functional-249952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:5d:22", ip: ""} in network mk-functional-249952: {Iface:virbr1 ExpiryTime:2025-01-27 02:51:36 +0000 UTC Type:0 Mac:52:54:00:a8:5d:22 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-249952 Clientid:01:52:54:00:a8:5d:22}
I0127 01:54:25.449330 1073248 main.go:141] libmachine: (functional-249952) DBG | domain functional-249952 has defined IP address 192.168.39.209 and MAC address 52:54:00:a8:5d:22 in network mk-functional-249952
I0127 01:54:25.449543 1073248 main.go:141] libmachine: (functional-249952) Calling .GetSSHPort
I0127 01:54:25.449724 1073248 main.go:141] libmachine: (functional-249952) Calling .GetSSHKeyPath
I0127 01:54:25.449872 1073248 main.go:141] libmachine: (functional-249952) Calling .GetSSHUsername
I0127 01:54:25.450014 1073248 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/functional-249952/id_rsa Username:docker}
I0127 01:54:25.538207 1073248 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 01:54:25.588068 1073248 main.go:141] libmachine: Making call to close driver server
I0127 01:54:25.588087 1073248 main.go:141] libmachine: (functional-249952) Calling .Close
I0127 01:54:25.588377 1073248 main.go:141] libmachine: Successfully made call to close driver server
I0127 01:54:25.588415 1073248 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 01:54:25.588415 1073248 main.go:141] libmachine: (functional-249952) DBG | Closing plugin on server side
I0127 01:54:25.588424 1073248 main.go:141] libmachine: Making call to close driver server
I0127 01:54:25.588433 1073248 main.go:141] libmachine: (functional-249952) Calling .Close
I0127 01:54:25.588679 1073248 main.go:141] libmachine: Successfully made call to close driver server
I0127 01:54:25.588694 1073248 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 01:54:25.588699 1073248 main.go:141] libmachine: (functional-249952) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-249952 image ls --format yaml --alsologtostderr:
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "28671624"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "30908485"
- id: sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "20657536"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-249952
size: "2372971"
- id: sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "38601118"
- id: sha256:602cbd866907b9588b623cc03a1675e6b64caee7ff5c613a19a9daea9242c779
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-249952
size: "990"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "72080558"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "26258470"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-249952 image ls --format yaml --alsologtostderr:
I0127 01:54:21.013806 1073105 out.go:345] Setting OutFile to fd 1 ...
I0127 01:54:21.013949 1073105 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 01:54:21.013962 1073105 out.go:358] Setting ErrFile to fd 2...
I0127 01:54:21.013967 1073105 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 01:54:21.014206 1073105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
I0127 01:54:21.014874 1073105 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 01:54:21.014993 1073105 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 01:54:21.015370 1073105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 01:54:21.015449 1073105 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 01:54:21.036974 1073105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37083
I0127 01:54:21.037558 1073105 main.go:141] libmachine: () Calling .GetVersion
I0127 01:54:21.038350 1073105 main.go:141] libmachine: Using API Version  1
I0127 01:54:21.038389 1073105 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 01:54:21.039887 1073105 main.go:141] libmachine: () Calling .GetMachineName
I0127 01:54:21.040153 1073105 main.go:141] libmachine: (functional-249952) Calling .GetState
I0127 01:54:21.042922 1073105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 01:54:21.042975 1073105 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 01:54:21.071270 1073105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
I0127 01:54:21.071823 1073105 main.go:141] libmachine: () Calling .GetVersion
I0127 01:54:21.072467 1073105 main.go:141] libmachine: Using API Version  1
I0127 01:54:21.072502 1073105 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 01:54:21.072818 1073105 main.go:141] libmachine: () Calling .GetMachineName
I0127 01:54:21.073095 1073105 main.go:141] libmachine: (functional-249952) Calling .DriverName
I0127 01:54:21.073286 1073105 ssh_runner.go:195] Run: systemctl --version
I0127 01:54:21.073312 1073105 main.go:141] libmachine: (functional-249952) Calling .GetSSHHostname
I0127 01:54:21.077981 1073105 main.go:141] libmachine: (functional-249952) DBG | domain functional-249952 has defined MAC address 52:54:00:a8:5d:22 in network mk-functional-249952
I0127 01:54:21.078407 1073105 main.go:141] libmachine: (functional-249952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:5d:22", ip: ""} in network mk-functional-249952: {Iface:virbr1 ExpiryTime:2025-01-27 02:51:36 +0000 UTC Type:0 Mac:52:54:00:a8:5d:22 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-249952 Clientid:01:52:54:00:a8:5d:22}
I0127 01:54:21.078435 1073105 main.go:141] libmachine: (functional-249952) DBG | domain functional-249952 has defined IP address 192.168.39.209 and MAC address 52:54:00:a8:5d:22 in network mk-functional-249952
I0127 01:54:21.078732 1073105 main.go:141] libmachine: (functional-249952) Calling .GetSSHPort
I0127 01:54:21.078952 1073105 main.go:141] libmachine: (functional-249952) Calling .GetSSHKeyPath
I0127 01:54:21.079112 1073105 main.go:141] libmachine: (functional-249952) Calling .GetSSHUsername
I0127 01:54:21.079277 1073105 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/functional-249952/id_rsa Username:docker}
I0127 01:54:21.205635 1073105 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 01:54:21.285841 1073105 main.go:141] libmachine: Making call to close driver server
I0127 01:54:21.285861 1073105 main.go:141] libmachine: (functional-249952) Calling .Close
I0127 01:54:21.286183 1073105 main.go:141] libmachine: Successfully made call to close driver server
I0127 01:54:21.286208 1073105 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 01:54:21.286217 1073105 main.go:141] libmachine: Making call to close driver server
I0127 01:54:21.286224 1073105 main.go:141] libmachine: (functional-249952) Calling .Close
I0127 01:54:21.286478 1073105 main.go:141] libmachine: Successfully made call to close driver server
I0127 01:54:21.286497 1073105 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 01:54:21.286519 1073105 main.go:141] libmachine: (functional-249952) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.43s)
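
The four ImageList subtests differ only in output format; a sketch of the equivalent commands (--alsologtostderr in the test runs only adds the libmachine debug output seen in the Stderr blocks above):

	out/minikube-linux-amd64 -p functional-249952 image ls --format short
	out/minikube-linux-amd64 -p functional-249952 image ls --format table
	out/minikube-linux-amd64 -p functional-249952 image ls --format json
	out/minikube-linux-amd64 -p functional-249952 image ls --format yaml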

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-249952 ssh pgrep buildkitd: exit status 1 (220.738835ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image build -t localhost/my-image:functional-249952 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-249952 image build -t localhost/my-image:functional-249952 testdata/build --alsologtostderr: (3.472985093s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-249952 image build -t localhost/my-image:functional-249952 testdata/build --alsologtostderr:
I0127 01:54:21.650712 1073191 out.go:345] Setting OutFile to fd 1 ...
I0127 01:54:21.650994 1073191 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 01:54:21.651005 1073191 out.go:358] Setting ErrFile to fd 2...
I0127 01:54:21.651009 1073191 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 01:54:21.651282 1073191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
I0127 01:54:21.652134 1073191 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 01:54:21.652778 1073191 config.go:182] Loaded profile config "functional-249952": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 01:54:21.653214 1073191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 01:54:21.653265 1073191 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 01:54:21.669596 1073191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
I0127 01:54:21.670274 1073191 main.go:141] libmachine: () Calling .GetVersion
I0127 01:54:21.670950 1073191 main.go:141] libmachine: Using API Version  1
I0127 01:54:21.670979 1073191 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 01:54:21.671423 1073191 main.go:141] libmachine: () Calling .GetMachineName
I0127 01:54:21.671687 1073191 main.go:141] libmachine: (functional-249952) Calling .GetState
I0127 01:54:21.673921 1073191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 01:54:21.673970 1073191 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 01:54:21.689903 1073191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39801
I0127 01:54:21.690378 1073191 main.go:141] libmachine: () Calling .GetVersion
I0127 01:54:21.690868 1073191 main.go:141] libmachine: Using API Version  1
I0127 01:54:21.690894 1073191 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 01:54:21.691236 1073191 main.go:141] libmachine: () Calling .GetMachineName
I0127 01:54:21.691453 1073191 main.go:141] libmachine: (functional-249952) Calling .DriverName
I0127 01:54:21.691704 1073191 ssh_runner.go:195] Run: systemctl --version
I0127 01:54:21.691733 1073191 main.go:141] libmachine: (functional-249952) Calling .GetSSHHostname
I0127 01:54:21.694886 1073191 main.go:141] libmachine: (functional-249952) DBG | domain functional-249952 has defined MAC address 52:54:00:a8:5d:22 in network mk-functional-249952
I0127 01:54:21.695331 1073191 main.go:141] libmachine: (functional-249952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:5d:22", ip: ""} in network mk-functional-249952: {Iface:virbr1 ExpiryTime:2025-01-27 02:51:36 +0000 UTC Type:0 Mac:52:54:00:a8:5d:22 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-249952 Clientid:01:52:54:00:a8:5d:22}
I0127 01:54:21.695363 1073191 main.go:141] libmachine: (functional-249952) DBG | domain functional-249952 has defined IP address 192.168.39.209 and MAC address 52:54:00:a8:5d:22 in network mk-functional-249952
I0127 01:54:21.695574 1073191 main.go:141] libmachine: (functional-249952) Calling .GetSSHPort
I0127 01:54:21.695783 1073191 main.go:141] libmachine: (functional-249952) Calling .GetSSHKeyPath
I0127 01:54:21.695923 1073191 main.go:141] libmachine: (functional-249952) Calling .GetSSHUsername
I0127 01:54:21.696053 1073191 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/functional-249952/id_rsa Username:docker}
I0127 01:54:21.781698 1073191 build_images.go:161] Building image from path: /tmp/build.2503293644.tar
I0127 01:54:21.781778 1073191 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 01:54:21.797478 1073191 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2503293644.tar
I0127 01:54:21.808291 1073191 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2503293644.tar: stat -c "%s %y" /var/lib/minikube/build/build.2503293644.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2503293644.tar': No such file or directory
I0127 01:54:21.808334 1073191 ssh_runner.go:362] scp /tmp/build.2503293644.tar --> /var/lib/minikube/build/build.2503293644.tar (3072 bytes)
I0127 01:54:21.845597 1073191 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2503293644
I0127 01:54:21.858634 1073191 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2503293644 -xf /var/lib/minikube/build/build.2503293644.tar
I0127 01:54:21.888545 1073191 containerd.go:394] Building image: /var/lib/minikube/build/build.2503293644
I0127 01:54:21.888639 1073191 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2503293644 --local dockerfile=/var/lib/minikube/build/build.2503293644 --output type=image,name=localhost/my-image:functional-249952
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:3bdb0c4065243ec359156a23a9188ad1db02a4e9617fb6152721caeeb3bc822e
#8 exporting manifest sha256:3bdb0c4065243ec359156a23a9188ad1db02a4e9617fb6152721caeeb3bc822e 0.0s done
#8 exporting config sha256:73b71e6adcae93ad06b946bf0fd0a8c8b561d8a4582d798c358d71c4323e1655 0.0s done
#8 naming to localhost/my-image:functional-249952 0.0s done
#8 DONE 0.4s
I0127 01:54:25.013660 1073191 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2503293644 --local dockerfile=/var/lib/minikube/build/build.2503293644 --output type=image,name=localhost/my-image:functional-249952: (3.124969108s)
I0127 01:54:25.013776 1073191 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2503293644
I0127 01:54:25.037065 1073191 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2503293644.tar
I0127 01:54:25.061373 1073191 build_images.go:217] Built localhost/my-image:functional-249952 from /tmp/build.2503293644.tar
I0127 01:54:25.061410 1073191 build_images.go:133] succeeded building to: functional-249952
I0127 01:54:25.061416 1073191 build_images.go:134] failed building to: 
I0127 01:54:25.061450 1073191 main.go:141] libmachine: Making call to close driver server
I0127 01:54:25.061471 1073191 main.go:141] libmachine: (functional-249952) Calling .Close
I0127 01:54:25.061806 1073191 main.go:141] libmachine: (functional-249952) DBG | Closing plugin on server side
I0127 01:54:25.061825 1073191 main.go:141] libmachine: Successfully made call to close driver server
I0127 01:54:25.061846 1073191 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 01:54:25.061861 1073191 main.go:141] libmachine: Making call to close driver server
I0127 01:54:25.061869 1073191 main.go:141] libmachine: (functional-249952) Calling .Close
I0127 01:54:25.062137 1073191 main.go:141] libmachine: Successfully made call to close driver server
I0127 01:54:25.062146 1073191 main.go:141] libmachine: (functional-249952) DBG | Closing plugin on server side
I0127 01:54:25.062203 1073191 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)
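Editor's note: build steps #1-#8 above imply a three-instruction Dockerfile (FROM the pinned gcr.io/k8s-minikube/busybox image, RUN true, ADD content.txt /) with a 62B build context. A minimal sketch of reproducing the same build by hand against this profile is below; the Dockerfile text, the content.txt payload, and the use of the `image build` subcommand are inferred from the log rather than copied from minikube's testdata, so treat it as an approximation.

  mkdir -p /tmp/imagebuild
  printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > /tmp/imagebuild/Dockerfile
  printf 'hello from ImageBuild\n' > /tmp/imagebuild/content.txt        # stand-in payload; the real file is in minikube testdata
  out/minikube-linux-amd64 -p functional-249952 image build -t localhost/my-image:functional-249952 /tmp/imagebuild
  out/minikube-linux-amd64 -p functional-249952 image ls | grep my-image # confirm the tag landed in containerd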

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-249952
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image load --daemon kicbase/echo-server:functional-249952 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-249952 image load --daemon kicbase/echo-server:functional-249952 --alsologtostderr: (1.444593866s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.71s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image load --daemon kicbase/echo-server:functional-249952 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-249952 image load --daemon kicbase/echo-server:functional-249952 --alsologtostderr: (1.042679441s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-249952
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image load --daemon kicbase/echo-server:functional-249952 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-249952 image load --daemon kicbase/echo-server:functional-249952 --alsologtostderr: (1.170140715s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image save kicbase/echo-server:functional-249952 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image rm kicbase/echo-server:functional-249952 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-249952
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 image save --daemon kicbase/echo-server:functional-249952 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-249952
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
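Editor's note: taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full save/remove/reload round trip through containerd plus a save back into the host Docker daemon. A condensed reproduction sketch follows; the tarball path is arbitrary (the run above uses a Jenkins workspace path), everything else is the same commands the tests invoke.

  out/minikube-linux-amd64 -p functional-249952 image save kicbase/echo-server:functional-249952 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-249952 image rm kicbase/echo-server:functional-249952
  out/minikube-linux-amd64 -p functional-249952 image ls        # tag should be gone from containerd
  out/minikube-linux-amd64 -p functional-249952 image load /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-249952 image ls        # tag restored from the tarball
  docker rmi kicbase/echo-server:functional-249952
  out/minikube-linux-amd64 -p functional-249952 image save --daemon kicbase/echo-server:functional-249952
  docker image inspect kicbase/echo-server:functional-249952    # host-side copy restored from the cluster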

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-249952 update-context --alsologtostderr -v=2
2025/01/27 01:54:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
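Editor's note: all three update-context cases run the same command; what differs is the pre-existing kubeconfig state (no changes needed, no minikube cluster entry, no clusters at all). One way to confirm the rewrite took effect, assuming kubectl is on the PATH and pointed at the same kubeconfig, is to read the cluster's server URL back out of it; the jsonpath filter below is an illustration, not part of the test.

  out/minikube-linux-amd64 -p functional-249952 update-context --alsologtostderr -v=2
  kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-249952")].cluster.server}'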

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-249952
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-249952
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-249952
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (196.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-345229 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 01:56:46.279952 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:57:13.986081 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-345229 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m15.914871335s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (196.60s)
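Editor's note: the --ha flag provisions three control-plane nodes fronted by the shared API endpoint at 192.168.39.254:8443 that later status checks query (presumably a virtual IP managed inside the cluster). A quick manual check of the resulting topology, assuming the kubeconfig context ha-345229 was created by the start:

  out/minikube-linux-amd64 -p ha-345229 status -v=7 --alsologtostderr
  kubectl --context ha-345229 get nodes -o wide    # expect three control-plane nodes at this point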

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-345229 -- rollout status deployment/busybox: (3.294741265s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-9mrlt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-hjmp2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-k2qbv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-9mrlt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-hjmp2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-k2qbv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-9mrlt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-hjmp2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-k2qbv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.51s)
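Editor's note: the manifest itself (testdata/ha/ha-pod-dns-test.yaml) is not shown in this log; judging from the rollout command and the three busybox-58667487b6-* pods it creates a 3-replica busybox Deployment named busybox. The per-pod DNS checks above can be replayed with a loop over the pod names, mirroring the jsonpath query the test itself uses:

  for pod in $(kubectl --context ha-345229 get pods -o jsonpath='{.items[*].metadata.name}'); do
    kubectl --context ha-345229 exec "$pod" -- nslookup kubernetes.io
    kubectl --context ha-345229 exec "$pod" -- nslookup kubernetes.default
    kubectl --context ha-345229 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done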

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-9mrlt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-9mrlt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-hjmp2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-hjmp2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-k2qbv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-345229 -- exec busybox-58667487b6-k2qbv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)
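Editor's note: the sh -c pipeline in these checks pulls the resolved address of host.minikube.internal out of busybox's nslookup output (line 5, third space-delimited field, here 192.168.39.1) and then pings it once from inside the pod, proving pod-to-host connectivity. Condensed into two steps for a single pod from this run:

  HOST_IP=$(kubectl --context ha-345229 exec busybox-58667487b6-9mrlt -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context ha-345229 exec busybox-58667487b6-9mrlt -- ping -c 1 "$HOST_IP"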

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (57.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-345229 -v=7 --alsologtostderr
E0127 01:59:00.956433 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:59:00.962869 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:59:00.974364 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:59:00.995874 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:59:01.037395 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:59:01.119664 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:59:01.281055 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:59:01.602407 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:59:02.244040 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:59:03.525572 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-345229 -v=7 --alsologtostderr: (56.682324064s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 status -v=7 --alsologtostderr
E0127 01:59:06.087292 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-345229 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp testdata/cp-test.txt ha-345229:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1097629348/001/cp-test_ha-345229.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229:/home/docker/cp-test.txt ha-345229-m02:/home/docker/cp-test_ha-345229_ha-345229-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m02 "sudo cat /home/docker/cp-test_ha-345229_ha-345229-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229:/home/docker/cp-test.txt ha-345229-m03:/home/docker/cp-test_ha-345229_ha-345229-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m03 "sudo cat /home/docker/cp-test_ha-345229_ha-345229-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229:/home/docker/cp-test.txt ha-345229-m04:/home/docker/cp-test_ha-345229_ha-345229-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m04 "sudo cat /home/docker/cp-test_ha-345229_ha-345229-m04.txt"
E0127 01:59:11.209197 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp testdata/cp-test.txt ha-345229-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1097629348/001/cp-test_ha-345229-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m02:/home/docker/cp-test.txt ha-345229:/home/docker/cp-test_ha-345229-m02_ha-345229.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229 "sudo cat /home/docker/cp-test_ha-345229-m02_ha-345229.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m02:/home/docker/cp-test.txt ha-345229-m03:/home/docker/cp-test_ha-345229-m02_ha-345229-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m03 "sudo cat /home/docker/cp-test_ha-345229-m02_ha-345229-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m02:/home/docker/cp-test.txt ha-345229-m04:/home/docker/cp-test_ha-345229-m02_ha-345229-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m04 "sudo cat /home/docker/cp-test_ha-345229-m02_ha-345229-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp testdata/cp-test.txt ha-345229-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1097629348/001/cp-test_ha-345229-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m03:/home/docker/cp-test.txt ha-345229:/home/docker/cp-test_ha-345229-m03_ha-345229.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229 "sudo cat /home/docker/cp-test_ha-345229-m03_ha-345229.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m03:/home/docker/cp-test.txt ha-345229-m02:/home/docker/cp-test_ha-345229-m03_ha-345229-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m02 "sudo cat /home/docker/cp-test_ha-345229-m03_ha-345229-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m03:/home/docker/cp-test.txt ha-345229-m04:/home/docker/cp-test_ha-345229-m03_ha-345229-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m04 "sudo cat /home/docker/cp-test_ha-345229-m03_ha-345229-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp testdata/cp-test.txt ha-345229-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1097629348/001/cp-test_ha-345229-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m04:/home/docker/cp-test.txt ha-345229:/home/docker/cp-test_ha-345229-m04_ha-345229.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229 "sudo cat /home/docker/cp-test_ha-345229-m04_ha-345229.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m04:/home/docker/cp-test.txt ha-345229-m02:/home/docker/cp-test_ha-345229-m04_ha-345229-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m02 "sudo cat /home/docker/cp-test_ha-345229-m04_ha-345229-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m04:/home/docker/cp-test.txt ha-345229-m03:/home/docker/cp-test_ha-345229-m04_ha-345229-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m03 "sudo cat /home/docker/cp-test_ha-345229-m04_ha-345229-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.42s)
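Editor's note: the matrix above copies a test file onto every node and then cross-copies it between every ordered pair of nodes, verifying each hop with ssh plus sudo cat. One hop of that matrix, by hand, using the same commands the helpers run:

  out/minikube-linux-amd64 -p ha-345229 cp testdata/cp-test.txt ha-345229-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-345229 cp ha-345229-m02:/home/docker/cp-test.txt ha-345229-m03:/home/docker/cp-test_ha-345229-m02_ha-345229-m03.txt
  out/minikube-linux-amd64 -p ha-345229 ssh -n ha-345229-m03 "sudo cat /home/docker/cp-test_ha-345229-m02_ha-345229-m03.txt"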

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (91.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 node stop m02 -v=7 --alsologtostderr
E0127 01:59:21.451269 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:59:41.932816 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:00:22.894308 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-345229 node stop m02 -v=7 --alsologtostderr: (1m30.991185129s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-345229 status -v=7 --alsologtostderr: exit status 7 (665.308631ms)

                                                
                                                
-- stdout --
	ha-345229
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-345229-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-345229-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-345229-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:00:51.773871 1078016 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:00:51.774001 1078016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:00:51.774011 1078016 out.go:358] Setting ErrFile to fd 2...
	I0127 02:00:51.774015 1078016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:00:51.774218 1078016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 02:00:51.774396 1078016 out.go:352] Setting JSON to false
	I0127 02:00:51.774424 1078016 mustload.go:65] Loading cluster: ha-345229
	I0127 02:00:51.774550 1078016 notify.go:220] Checking for updates...
	I0127 02:00:51.774870 1078016 config.go:182] Loaded profile config "ha-345229": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:00:51.774893 1078016 status.go:174] checking status of ha-345229 ...
	I0127 02:00:51.775318 1078016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:00:51.775363 1078016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:00:51.795636 1078016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0127 02:00:51.796092 1078016 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:00:51.796816 1078016 main.go:141] libmachine: Using API Version  1
	I0127 02:00:51.796860 1078016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:00:51.797229 1078016 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:00:51.797437 1078016 main.go:141] libmachine: (ha-345229) Calling .GetState
	I0127 02:00:51.799219 1078016 status.go:371] ha-345229 host status = "Running" (err=<nil>)
	I0127 02:00:51.799241 1078016 host.go:66] Checking if "ha-345229" exists ...
	I0127 02:00:51.799699 1078016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:00:51.799758 1078016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:00:51.815756 1078016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I0127 02:00:51.816186 1078016 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:00:51.816676 1078016 main.go:141] libmachine: Using API Version  1
	I0127 02:00:51.816703 1078016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:00:51.817063 1078016 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:00:51.817310 1078016 main.go:141] libmachine: (ha-345229) Calling .GetIP
	I0127 02:00:51.820270 1078016 main.go:141] libmachine: (ha-345229) DBG | domain ha-345229 has defined MAC address 52:54:00:28:60:ec in network mk-ha-345229
	I0127 02:00:51.820699 1078016 main.go:141] libmachine: (ha-345229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:60:ec", ip: ""} in network mk-ha-345229: {Iface:virbr1 ExpiryTime:2025-01-27 02:55:01 +0000 UTC Type:0 Mac:52:54:00:28:60:ec Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-345229 Clientid:01:52:54:00:28:60:ec}
	I0127 02:00:51.820735 1078016 main.go:141] libmachine: (ha-345229) DBG | domain ha-345229 has defined IP address 192.168.39.144 and MAC address 52:54:00:28:60:ec in network mk-ha-345229
	I0127 02:00:51.820875 1078016 host.go:66] Checking if "ha-345229" exists ...
	I0127 02:00:51.821219 1078016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:00:51.821259 1078016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:00:51.836420 1078016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44883
	I0127 02:00:51.837011 1078016 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:00:51.837537 1078016 main.go:141] libmachine: Using API Version  1
	I0127 02:00:51.837560 1078016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:00:51.837888 1078016 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:00:51.838077 1078016 main.go:141] libmachine: (ha-345229) Calling .DriverName
	I0127 02:00:51.838281 1078016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:00:51.838339 1078016 main.go:141] libmachine: (ha-345229) Calling .GetSSHHostname
	I0127 02:00:51.841312 1078016 main.go:141] libmachine: (ha-345229) DBG | domain ha-345229 has defined MAC address 52:54:00:28:60:ec in network mk-ha-345229
	I0127 02:00:51.841782 1078016 main.go:141] libmachine: (ha-345229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:60:ec", ip: ""} in network mk-ha-345229: {Iface:virbr1 ExpiryTime:2025-01-27 02:55:01 +0000 UTC Type:0 Mac:52:54:00:28:60:ec Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-345229 Clientid:01:52:54:00:28:60:ec}
	I0127 02:00:51.841812 1078016 main.go:141] libmachine: (ha-345229) DBG | domain ha-345229 has defined IP address 192.168.39.144 and MAC address 52:54:00:28:60:ec in network mk-ha-345229
	I0127 02:00:51.841990 1078016 main.go:141] libmachine: (ha-345229) Calling .GetSSHPort
	I0127 02:00:51.842197 1078016 main.go:141] libmachine: (ha-345229) Calling .GetSSHKeyPath
	I0127 02:00:51.842352 1078016 main.go:141] libmachine: (ha-345229) Calling .GetSSHUsername
	I0127 02:00:51.842491 1078016 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/ha-345229/id_rsa Username:docker}
	I0127 02:00:51.931981 1078016 ssh_runner.go:195] Run: systemctl --version
	I0127 02:00:51.939875 1078016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:00:51.958665 1078016 kubeconfig.go:125] found "ha-345229" server: "https://192.168.39.254:8443"
	I0127 02:00:51.958712 1078016 api_server.go:166] Checking apiserver status ...
	I0127 02:00:51.958750 1078016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:00:51.977660 1078016 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1171/cgroup
	W0127 02:00:51.990849 1078016 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1171/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:00:51.990906 1078016 ssh_runner.go:195] Run: ls
	I0127 02:00:51.995868 1078016 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 02:00:52.001367 1078016 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 02:00:52.001393 1078016 status.go:463] ha-345229 apiserver status = Running (err=<nil>)
	I0127 02:00:52.001403 1078016 status.go:176] ha-345229 status: &{Name:ha-345229 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:00:52.001420 1078016 status.go:174] checking status of ha-345229-m02 ...
	I0127 02:00:52.001740 1078016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:00:52.001779 1078016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:00:52.017044 1078016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I0127 02:00:52.017587 1078016 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:00:52.018156 1078016 main.go:141] libmachine: Using API Version  1
	I0127 02:00:52.018181 1078016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:00:52.018532 1078016 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:00:52.018742 1078016 main.go:141] libmachine: (ha-345229-m02) Calling .GetState
	I0127 02:00:52.020711 1078016 status.go:371] ha-345229-m02 host status = "Stopped" (err=<nil>)
	I0127 02:00:52.020729 1078016 status.go:384] host is not running, skipping remaining checks
	I0127 02:00:52.020738 1078016 status.go:176] ha-345229-m02 status: &{Name:ha-345229-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:00:52.020763 1078016 status.go:174] checking status of ha-345229-m03 ...
	I0127 02:00:52.021191 1078016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:00:52.021238 1078016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:00:52.036871 1078016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35351
	I0127 02:00:52.037431 1078016 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:00:52.038023 1078016 main.go:141] libmachine: Using API Version  1
	I0127 02:00:52.038048 1078016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:00:52.038390 1078016 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:00:52.038593 1078016 main.go:141] libmachine: (ha-345229-m03) Calling .GetState
	I0127 02:00:52.040302 1078016 status.go:371] ha-345229-m03 host status = "Running" (err=<nil>)
	I0127 02:00:52.040323 1078016 host.go:66] Checking if "ha-345229-m03" exists ...
	I0127 02:00:52.040720 1078016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:00:52.040762 1078016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:00:52.057215 1078016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35623
	I0127 02:00:52.057646 1078016 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:00:52.058077 1078016 main.go:141] libmachine: Using API Version  1
	I0127 02:00:52.058099 1078016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:00:52.058413 1078016 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:00:52.058629 1078016 main.go:141] libmachine: (ha-345229-m03) Calling .GetIP
	I0127 02:00:52.061180 1078016 main.go:141] libmachine: (ha-345229-m03) DBG | domain ha-345229-m03 has defined MAC address 52:54:00:0b:5c:1c in network mk-ha-345229
	I0127 02:00:52.061651 1078016 main.go:141] libmachine: (ha-345229-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:5c:1c", ip: ""} in network mk-ha-345229: {Iface:virbr1 ExpiryTime:2025-01-27 02:57:03 +0000 UTC Type:0 Mac:52:54:00:0b:5c:1c Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-345229-m03 Clientid:01:52:54:00:0b:5c:1c}
	I0127 02:00:52.061674 1078016 main.go:141] libmachine: (ha-345229-m03) DBG | domain ha-345229-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:0b:5c:1c in network mk-ha-345229
	I0127 02:00:52.061852 1078016 host.go:66] Checking if "ha-345229-m03" exists ...
	I0127 02:00:52.062157 1078016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:00:52.062204 1078016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:00:52.078304 1078016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0127 02:00:52.078888 1078016 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:00:52.079484 1078016 main.go:141] libmachine: Using API Version  1
	I0127 02:00:52.079508 1078016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:00:52.079841 1078016 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:00:52.080053 1078016 main.go:141] libmachine: (ha-345229-m03) Calling .DriverName
	I0127 02:00:52.080255 1078016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:00:52.080283 1078016 main.go:141] libmachine: (ha-345229-m03) Calling .GetSSHHostname
	I0127 02:00:52.083250 1078016 main.go:141] libmachine: (ha-345229-m03) DBG | domain ha-345229-m03 has defined MAC address 52:54:00:0b:5c:1c in network mk-ha-345229
	I0127 02:00:52.083799 1078016 main.go:141] libmachine: (ha-345229-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:5c:1c", ip: ""} in network mk-ha-345229: {Iface:virbr1 ExpiryTime:2025-01-27 02:57:03 +0000 UTC Type:0 Mac:52:54:00:0b:5c:1c Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-345229-m03 Clientid:01:52:54:00:0b:5c:1c}
	I0127 02:00:52.083826 1078016 main.go:141] libmachine: (ha-345229-m03) DBG | domain ha-345229-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:0b:5c:1c in network mk-ha-345229
	I0127 02:00:52.083979 1078016 main.go:141] libmachine: (ha-345229-m03) Calling .GetSSHPort
	I0127 02:00:52.084182 1078016 main.go:141] libmachine: (ha-345229-m03) Calling .GetSSHKeyPath
	I0127 02:00:52.084369 1078016 main.go:141] libmachine: (ha-345229-m03) Calling .GetSSHUsername
	I0127 02:00:52.084540 1078016 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/ha-345229-m03/id_rsa Username:docker}
	I0127 02:00:52.168551 1078016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:00:52.185739 1078016 kubeconfig.go:125] found "ha-345229" server: "https://192.168.39.254:8443"
	I0127 02:00:52.185777 1078016 api_server.go:166] Checking apiserver status ...
	I0127 02:00:52.185823 1078016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:00:52.201417 1078016 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup
	W0127 02:00:52.212235 1078016 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:00:52.212293 1078016 ssh_runner.go:195] Run: ls
	I0127 02:00:52.217178 1078016 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 02:00:52.222052 1078016 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 02:00:52.222080 1078016 status.go:463] ha-345229-m03 apiserver status = Running (err=<nil>)
	I0127 02:00:52.222092 1078016 status.go:176] ha-345229-m03 status: &{Name:ha-345229-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:00:52.222115 1078016 status.go:174] checking status of ha-345229-m04 ...
	I0127 02:00:52.222530 1078016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:00:52.222591 1078016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:00:52.239185 1078016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0127 02:00:52.239643 1078016 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:00:52.240178 1078016 main.go:141] libmachine: Using API Version  1
	I0127 02:00:52.240208 1078016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:00:52.240512 1078016 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:00:52.240729 1078016 main.go:141] libmachine: (ha-345229-m04) Calling .GetState
	I0127 02:00:52.242324 1078016 status.go:371] ha-345229-m04 host status = "Running" (err=<nil>)
	I0127 02:00:52.242342 1078016 host.go:66] Checking if "ha-345229-m04" exists ...
	I0127 02:00:52.242784 1078016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:00:52.242849 1078016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:00:52.257907 1078016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36349
	I0127 02:00:52.258319 1078016 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:00:52.258782 1078016 main.go:141] libmachine: Using API Version  1
	I0127 02:00:52.258803 1078016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:00:52.259148 1078016 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:00:52.259377 1078016 main.go:141] libmachine: (ha-345229-m04) Calling .GetIP
	I0127 02:00:52.262295 1078016 main.go:141] libmachine: (ha-345229-m04) DBG | domain ha-345229-m04 has defined MAC address 52:54:00:5b:67:0f in network mk-ha-345229
	I0127 02:00:52.262742 1078016 main.go:141] libmachine: (ha-345229-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:67:0f", ip: ""} in network mk-ha-345229: {Iface:virbr1 ExpiryTime:2025-01-27 02:58:25 +0000 UTC Type:0 Mac:52:54:00:5b:67:0f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-345229-m04 Clientid:01:52:54:00:5b:67:0f}
	I0127 02:00:52.262777 1078016 main.go:141] libmachine: (ha-345229-m04) DBG | domain ha-345229-m04 has defined IP address 192.168.39.35 and MAC address 52:54:00:5b:67:0f in network mk-ha-345229
	I0127 02:00:52.262961 1078016 host.go:66] Checking if "ha-345229-m04" exists ...
	I0127 02:00:52.263280 1078016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:00:52.263318 1078016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:00:52.278815 1078016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
	I0127 02:00:52.279335 1078016 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:00:52.279876 1078016 main.go:141] libmachine: Using API Version  1
	I0127 02:00:52.279895 1078016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:00:52.280281 1078016 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:00:52.280469 1078016 main.go:141] libmachine: (ha-345229-m04) Calling .DriverName
	I0127 02:00:52.280637 1078016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:00:52.280655 1078016 main.go:141] libmachine: (ha-345229-m04) Calling .GetSSHHostname
	I0127 02:00:52.283428 1078016 main.go:141] libmachine: (ha-345229-m04) DBG | domain ha-345229-m04 has defined MAC address 52:54:00:5b:67:0f in network mk-ha-345229
	I0127 02:00:52.283929 1078016 main.go:141] libmachine: (ha-345229-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:67:0f", ip: ""} in network mk-ha-345229: {Iface:virbr1 ExpiryTime:2025-01-27 02:58:25 +0000 UTC Type:0 Mac:52:54:00:5b:67:0f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-345229-m04 Clientid:01:52:54:00:5b:67:0f}
	I0127 02:00:52.283949 1078016 main.go:141] libmachine: (ha-345229-m04) DBG | domain ha-345229-m04 has defined IP address 192.168.39.35 and MAC address 52:54:00:5b:67:0f in network mk-ha-345229
	I0127 02:00:52.284361 1078016 main.go:141] libmachine: (ha-345229-m04) Calling .GetSSHPort
	I0127 02:00:52.284544 1078016 main.go:141] libmachine: (ha-345229-m04) Calling .GetSSHKeyPath
	I0127 02:00:52.284688 1078016 main.go:141] libmachine: (ha-345229-m04) Calling .GetSSHUsername
	I0127 02:00:52.284825 1078016 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/ha-345229-m04/id_rsa Username:docker}
	I0127 02:00:52.369487 1078016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:00:52.386848 1078016 status.go:176] ha-345229-m04 status: &{Name:ha-345229-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.66s)
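Editor's note: the non-zero exit from the status command is expected here; with one host stopped, status reports per-node state and exits 7 in this run, and the test passes as long as the breakdown matches (m02 stopped, the other control planes running). Reproduced by hand:

  out/minikube-linux-amd64 -p ha-345229 node stop m02 -v=7 --alsologtostderr
  out/minikube-linux-amd64 -p ha-345229 status -v=7 --alsologtostderr
  echo "status exit code: $?"    # 7 observed in this run while m02 is down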

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (43.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-345229 node start m02 -v=7 --alsologtostderr: (42.860544869s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (492.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-345229 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-345229 -v=7 --alsologtostderr
E0127 02:01:44.816502 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:01:46.279638 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:00.955778 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:28.658409 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-345229 -v=7 --alsologtostderr: (4m33.977319255s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-345229 --wait=true -v=7 --alsologtostderr
E0127 02:06:46.280270 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:08:09.347916 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:09:00.956308 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-345229 --wait=true -v=7 --alsologtostderr: (3m38.373807457s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-345229
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (492.47s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (6.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-345229 node delete m03 -v=7 --alsologtostderr: (5.318545935s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.12s)
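
A note on the readiness check above (ha_test.go:521): the test pipes `kubectl get nodes` through a go-template that prints the Ready condition of every node. The following is a hedged, stand-alone sketch, not part of the test suite: it replays the same template over a hand-written, NodeList-shaped JSON document (the sample data is invented for illustration) to show what the assertion sees, namely one " True" line per Ready node.

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// readyTmpl is the template string used by the test (without the extra shell quoting).
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// sampleNodeList is illustrative only; a real run feeds the template with the
// live NodeList returned by the API server.
const sampleNodeList = `{
  "items": [
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
  ]
}`

func main() {
	// kubectl evaluates go-templates over the JSON-decoded object, so the
	// lowercase keys (.items, .type, .status) resolve as map lookups.
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(sampleNodeList), &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}

Run locally this prints " True" once per node, which is the output the test compares against after deleting the secondary node.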

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 stop -v=7 --alsologtostderr
E0127 02:11:46.279673 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:14:00.956105 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-345229 stop -v=7 --alsologtostderr: (4m32.430091113s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-345229 status -v=7 --alsologtostderr: exit status 7 (117.14604ms)

                                                
                                                
-- stdout --
	ha-345229
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-345229-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-345229-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:14:29.484600 1082095 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:14:29.484721 1082095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:14:29.484731 1082095 out.go:358] Setting ErrFile to fd 2...
	I0127 02:14:29.484736 1082095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:14:29.484925 1082095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 02:14:29.485148 1082095 out.go:352] Setting JSON to false
	I0127 02:14:29.485202 1082095 mustload.go:65] Loading cluster: ha-345229
	I0127 02:14:29.485240 1082095 notify.go:220] Checking for updates...
	I0127 02:14:29.485665 1082095 config.go:182] Loaded profile config "ha-345229": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:14:29.485690 1082095 status.go:174] checking status of ha-345229 ...
	I0127 02:14:29.486160 1082095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:14:29.486206 1082095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:14:29.505497 1082095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33257
	I0127 02:14:29.505955 1082095 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:14:29.506657 1082095 main.go:141] libmachine: Using API Version  1
	I0127 02:14:29.506680 1082095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:14:29.507137 1082095 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:14:29.507396 1082095 main.go:141] libmachine: (ha-345229) Calling .GetState
	I0127 02:14:29.509188 1082095 status.go:371] ha-345229 host status = "Stopped" (err=<nil>)
	I0127 02:14:29.509206 1082095 status.go:384] host is not running, skipping remaining checks
	I0127 02:14:29.509214 1082095 status.go:176] ha-345229 status: &{Name:ha-345229 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:14:29.509280 1082095 status.go:174] checking status of ha-345229-m02 ...
	I0127 02:14:29.509600 1082095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:14:29.509649 1082095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:14:29.525029 1082095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32815
	I0127 02:14:29.525431 1082095 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:14:29.525864 1082095 main.go:141] libmachine: Using API Version  1
	I0127 02:14:29.525888 1082095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:14:29.526219 1082095 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:14:29.526414 1082095 main.go:141] libmachine: (ha-345229-m02) Calling .GetState
	I0127 02:14:29.527963 1082095 status.go:371] ha-345229-m02 host status = "Stopped" (err=<nil>)
	I0127 02:14:29.527978 1082095 status.go:384] host is not running, skipping remaining checks
	I0127 02:14:29.527984 1082095 status.go:176] ha-345229-m02 status: &{Name:ha-345229-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:14:29.527999 1082095 status.go:174] checking status of ha-345229-m04 ...
	I0127 02:14:29.528300 1082095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:14:29.528339 1082095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:14:29.544151 1082095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0127 02:14:29.544665 1082095 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:14:29.545209 1082095 main.go:141] libmachine: Using API Version  1
	I0127 02:14:29.545234 1082095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:14:29.545610 1082095 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:14:29.545778 1082095 main.go:141] libmachine: (ha-345229-m04) Calling .GetState
	I0127 02:14:29.547364 1082095 status.go:371] ha-345229-m04 host status = "Stopped" (err=<nil>)
	I0127 02:14:29.547381 1082095 status.go:384] host is not running, skipping remaining checks
	I0127 02:14:29.547387 1082095 status.go:176] ha-345229-m04 status: &{Name:ha-345229-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.55s)
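
In this run `minikube -p ha-345229 status` exits with status 7 after the stop and prints one plain-text stanza per node, as shown in the stdout above. A minimal sketch of how a consumer might parse that text follows; the stanza layout (node name, then "key: value" lines, stanzas separated by blank lines) is taken from this report, and the parser is illustrative rather than minikube's own code.

package main

import (
	"fmt"
	"strings"
)

// statusOut reproduces two of the stanzas from the stdout above.
const statusOut = `ha-345229
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-345229-m04
type: Worker
host: Stopped
kubelet: Stopped`

func main() {
	// Each blank-line-separated stanza starts with the node name, followed by
	// "key: value" lines describing that node's components.
	for _, stanza := range strings.Split(statusOut, "\n\n") {
		lines := strings.Split(strings.TrimSpace(stanza), "\n")
		if len(lines) == 0 || lines[0] == "" {
			continue
		}
		node := lines[0]
		fields := map[string]string{}
		for _, l := range lines[1:] {
			if k, v, ok := strings.Cut(l, ": "); ok {
				fields[k] = v
			}
		}
		fmt.Printf("%s: host=%s kubelet=%s\n", node, fields["host"], fields["kubelet"])
	}
}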

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (165.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-345229 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 02:15:24.020762 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:16:46.280309 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-345229 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m44.285366053s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (165.05s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-345229 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-345229 --control-plane -v=7 --alsologtostderr: (1m14.315268134s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-345229 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.19s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                    
TestJSONOutput/start/Command (86.67s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-250891 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0127 02:19:00.959417 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-250891 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m26.668785205s)
--- PASS: TestJSONOutput/start/Command (86.67s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-250891 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-250891 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.62s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-250891 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-250891 --output=json --user=testUser: (6.621032135s)
--- PASS: TestJSONOutput/stop/Command (6.62s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-609226 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-609226 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.423089ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"91911b4f-5555-45dd-8644-feb61208fcae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-609226] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4a33bb2-cc2a-4102-9048-e9316c054c12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20316"}}
	{"specversion":"1.0","id":"76ec2c20-9e5a-4ce3-a6d1-85d298c442db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d064c7a-f5f0-4a70-88cf-6d026c82d70a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig"}}
	{"specversion":"1.0","id":"759b641d-8425-4065-8715-329a787cf05b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube"}}
	{"specversion":"1.0","id":"45f382d2-e596-4a9d-bc78-2b64c6301f73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0202266c-5a80-45c1-8c53-fff8cd05d096","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4bbb3e05-bdf2-4730-949c-6ffb1d0b8e4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-609226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-609226
--- PASS: TestErrorJSONOutput (0.21s)
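
Each line that `minikube start --output=json` emits in the stdout above is a single CloudEvents-style JSON object (specversion, id, source, type, data). A hedged sketch of how a consumer could decode such lines follows; the struct below only names fields visible in this report and is my own naming, not minikube's exported type.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// One of the lines from the TestErrorJSONOutput stdout above.
	logLine := `{"specversion":"1.0","id":"4bbb3e05-bdf2-4730-949c-6ffb1d0b8e4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	sc := bufio.NewScanner(strings.NewReader(logLine))
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Println("skipping non-JSON line:", err)
			continue
		}
		// An error event carries the exit code ("56" here, matching the
		// observed process exit status) and a human-readable message.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}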

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (92.83s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-722168 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-722168 --driver=kvm2  --container-runtime=containerd: (44.448170854s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-736977 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-736977 --driver=kvm2  --container-runtime=containerd: (45.246522074s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-722168
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-736977
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-736977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-736977
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-736977: (1.017375108s)
helpers_test.go:175: Cleaning up "first-722168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-722168
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-722168: (1.029902295s)
--- PASS: TestMinikubeProfile (92.83s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-919738 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0127 02:21:46.281201 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-919738 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.887959432s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.89s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-919738 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-919738 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-936079 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-936079 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.325583904s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.33s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-936079 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-936079 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-919738 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-936079 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-936079 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-936079
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-936079: (1.335668824s)
--- PASS: TestMountStart/serial/Stop (1.34s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.73s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-936079
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-936079: (21.734541555s)
--- PASS: TestMountStart/serial/RestartStopped (22.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-936079 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-936079 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (111.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-494920 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 02:24:00.955737 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:24:49.350003 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-494920 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m50.878336341s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.31s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-494920 -- rollout status deployment/busybox: (2.919740433s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- exec busybox-58667487b6-lr4wk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- exec busybox-58667487b6-t46r5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- exec busybox-58667487b6-lr4wk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- exec busybox-58667487b6-t46r5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- exec busybox-58667487b6-lr4wk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- exec busybox-58667487b6-t46r5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.50s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- exec busybox-58667487b6-lr4wk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- exec busybox-58667487b6-lr4wk -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- exec busybox-58667487b6-t46r5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-494920 -- exec busybox-58667487b6-t46r5 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
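
The exec commands above resolve host.minikube.internal inside each busybox pod (the awk/cut pipeline extracts the resolved address from nslookup output) and then ping the resulting host gateway, 192.168.39.1 in this run. For reference, the same lookup expressed with the Go standard library is sketched below; the name only resolves on minikube nodes and pods, so off-cluster this is illustrative and will simply report a lookup failure.

package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("host.minikube.internal")
	if err != nil {
		fmt.Println("lookup failed (expected outside the cluster):", err)
		return
	}
	// The test then pings the resolved address with `ping -c 1`.
	fmt.Println("host.minikube.internal resolves to", addrs[0])
}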

                                                
                                    
TestMultiNode/serial/AddNode (50.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-494920 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-494920 -v 3 --alsologtostderr: (50.071850862s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.65s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-494920 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 cp testdata/cp-test.txt multinode-494920:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 cp multinode-494920:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1934097122/001/cp-test_multinode-494920.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 cp multinode-494920:/home/docker/cp-test.txt multinode-494920-m02:/home/docker/cp-test_multinode-494920_multinode-494920-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920-m02 "sudo cat /home/docker/cp-test_multinode-494920_multinode-494920-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 cp multinode-494920:/home/docker/cp-test.txt multinode-494920-m03:/home/docker/cp-test_multinode-494920_multinode-494920-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920-m03 "sudo cat /home/docker/cp-test_multinode-494920_multinode-494920-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 cp testdata/cp-test.txt multinode-494920-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 cp multinode-494920-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1934097122/001/cp-test_multinode-494920-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 cp multinode-494920-m02:/home/docker/cp-test.txt multinode-494920:/home/docker/cp-test_multinode-494920-m02_multinode-494920.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920 "sudo cat /home/docker/cp-test_multinode-494920-m02_multinode-494920.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 cp multinode-494920-m02:/home/docker/cp-test.txt multinode-494920-m03:/home/docker/cp-test_multinode-494920-m02_multinode-494920-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920-m03 "sudo cat /home/docker/cp-test_multinode-494920-m02_multinode-494920-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 cp testdata/cp-test.txt multinode-494920-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 cp multinode-494920-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1934097122/001/cp-test_multinode-494920-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 cp multinode-494920-m03:/home/docker/cp-test.txt multinode-494920:/home/docker/cp-test_multinode-494920-m03_multinode-494920.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920 "sudo cat /home/docker/cp-test_multinode-494920-m03_multinode-494920.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 cp multinode-494920-m03:/home/docker/cp-test.txt multinode-494920-m02:/home/docker/cp-test_multinode-494920-m03_multinode-494920-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 ssh -n multinode-494920-m02 "sudo cat /home/docker/cp-test_multinode-494920-m03_multinode-494920-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.38s)

                                                
                                    
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-494920 node stop m03: (1.403964811s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-494920 status: exit status 7 (430.294797ms)

                                                
                                                
-- stdout --
	multinode-494920
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-494920-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-494920-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-494920 status --alsologtostderr: exit status 7 (438.117676ms)

                                                
                                                
-- stdout --
	multinode-494920
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-494920-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-494920-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:26:05.202716 1090356 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:26:05.202865 1090356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:26:05.202876 1090356 out.go:358] Setting ErrFile to fd 2...
	I0127 02:26:05.202883 1090356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:26:05.203092 1090356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 02:26:05.203329 1090356 out.go:352] Setting JSON to false
	I0127 02:26:05.203370 1090356 mustload.go:65] Loading cluster: multinode-494920
	I0127 02:26:05.203512 1090356 notify.go:220] Checking for updates...
	I0127 02:26:05.203775 1090356 config.go:182] Loaded profile config "multinode-494920": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:26:05.203799 1090356 status.go:174] checking status of multinode-494920 ...
	I0127 02:26:05.204198 1090356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:26:05.204250 1090356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:26:05.225730 1090356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
	I0127 02:26:05.226168 1090356 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:26:05.226854 1090356 main.go:141] libmachine: Using API Version  1
	I0127 02:26:05.226887 1090356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:26:05.227276 1090356 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:26:05.227521 1090356 main.go:141] libmachine: (multinode-494920) Calling .GetState
	I0127 02:26:05.229218 1090356 status.go:371] multinode-494920 host status = "Running" (err=<nil>)
	I0127 02:26:05.229237 1090356 host.go:66] Checking if "multinode-494920" exists ...
	I0127 02:26:05.229572 1090356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:26:05.229619 1090356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:26:05.245493 1090356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33033
	I0127 02:26:05.245982 1090356 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:26:05.246514 1090356 main.go:141] libmachine: Using API Version  1
	I0127 02:26:05.246533 1090356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:26:05.246919 1090356 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:26:05.247182 1090356 main.go:141] libmachine: (multinode-494920) Calling .GetIP
	I0127 02:26:05.250367 1090356 main.go:141] libmachine: (multinode-494920) DBG | domain multinode-494920 has defined MAC address 52:54:00:01:da:64 in network mk-multinode-494920
	I0127 02:26:05.250769 1090356 main.go:141] libmachine: (multinode-494920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:da:64", ip: ""} in network mk-multinode-494920: {Iface:virbr1 ExpiryTime:2025-01-27 03:23:23 +0000 UTC Type:0 Mac:52:54:00:01:da:64 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:multinode-494920 Clientid:01:52:54:00:01:da:64}
	I0127 02:26:05.250794 1090356 main.go:141] libmachine: (multinode-494920) DBG | domain multinode-494920 has defined IP address 192.168.39.7 and MAC address 52:54:00:01:da:64 in network mk-multinode-494920
	I0127 02:26:05.250916 1090356 host.go:66] Checking if "multinode-494920" exists ...
	I0127 02:26:05.251295 1090356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:26:05.251354 1090356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:26:05.267958 1090356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44543
	I0127 02:26:05.268450 1090356 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:26:05.268965 1090356 main.go:141] libmachine: Using API Version  1
	I0127 02:26:05.268990 1090356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:26:05.269380 1090356 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:26:05.269623 1090356 main.go:141] libmachine: (multinode-494920) Calling .DriverName
	I0127 02:26:05.269847 1090356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:26:05.269870 1090356 main.go:141] libmachine: (multinode-494920) Calling .GetSSHHostname
	I0127 02:26:05.272976 1090356 main.go:141] libmachine: (multinode-494920) DBG | domain multinode-494920 has defined MAC address 52:54:00:01:da:64 in network mk-multinode-494920
	I0127 02:26:05.273395 1090356 main.go:141] libmachine: (multinode-494920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:da:64", ip: ""} in network mk-multinode-494920: {Iface:virbr1 ExpiryTime:2025-01-27 03:23:23 +0000 UTC Type:0 Mac:52:54:00:01:da:64 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:multinode-494920 Clientid:01:52:54:00:01:da:64}
	I0127 02:26:05.273420 1090356 main.go:141] libmachine: (multinode-494920) DBG | domain multinode-494920 has defined IP address 192.168.39.7 and MAC address 52:54:00:01:da:64 in network mk-multinode-494920
	I0127 02:26:05.273630 1090356 main.go:141] libmachine: (multinode-494920) Calling .GetSSHPort
	I0127 02:26:05.273824 1090356 main.go:141] libmachine: (multinode-494920) Calling .GetSSHKeyPath
	I0127 02:26:05.273987 1090356 main.go:141] libmachine: (multinode-494920) Calling .GetSSHUsername
	I0127 02:26:05.274136 1090356 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/multinode-494920/id_rsa Username:docker}
	I0127 02:26:05.356744 1090356 ssh_runner.go:195] Run: systemctl --version
	I0127 02:26:05.364156 1090356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:26:05.379437 1090356 kubeconfig.go:125] found "multinode-494920" server: "https://192.168.39.7:8443"
	I0127 02:26:05.379471 1090356 api_server.go:166] Checking apiserver status ...
	I0127 02:26:05.379505 1090356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:26:05.394050 1090356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1114/cgroup
	W0127 02:26:05.404763 1090356 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1114/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:26:05.404834 1090356 ssh_runner.go:195] Run: ls
	I0127 02:26:05.409846 1090356 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 02:26:05.414553 1090356 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0127 02:26:05.414583 1090356 status.go:463] multinode-494920 apiserver status = Running (err=<nil>)
	I0127 02:26:05.414598 1090356 status.go:176] multinode-494920 status: &{Name:multinode-494920 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:26:05.414637 1090356 status.go:174] checking status of multinode-494920-m02 ...
	I0127 02:26:05.414945 1090356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:26:05.414987 1090356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:26:05.430383 1090356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0127 02:26:05.430862 1090356 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:26:05.431359 1090356 main.go:141] libmachine: Using API Version  1
	I0127 02:26:05.431382 1090356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:26:05.431702 1090356 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:26:05.431900 1090356 main.go:141] libmachine: (multinode-494920-m02) Calling .GetState
	I0127 02:26:05.433629 1090356 status.go:371] multinode-494920-m02 host status = "Running" (err=<nil>)
	I0127 02:26:05.433647 1090356 host.go:66] Checking if "multinode-494920-m02" exists ...
	I0127 02:26:05.433930 1090356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:26:05.433966 1090356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:26:05.449085 1090356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0127 02:26:05.449653 1090356 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:26:05.450265 1090356 main.go:141] libmachine: Using API Version  1
	I0127 02:26:05.450303 1090356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:26:05.450625 1090356 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:26:05.450846 1090356 main.go:141] libmachine: (multinode-494920-m02) Calling .GetIP
	I0127 02:26:05.453822 1090356 main.go:141] libmachine: (multinode-494920-m02) DBG | domain multinode-494920-m02 has defined MAC address 52:54:00:dd:99:c7 in network mk-multinode-494920
	I0127 02:26:05.454275 1090356 main.go:141] libmachine: (multinode-494920-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:99:c7", ip: ""} in network mk-multinode-494920: {Iface:virbr1 ExpiryTime:2025-01-27 03:24:25 +0000 UTC Type:0 Mac:52:54:00:dd:99:c7 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-494920-m02 Clientid:01:52:54:00:dd:99:c7}
	I0127 02:26:05.454299 1090356 main.go:141] libmachine: (multinode-494920-m02) DBG | domain multinode-494920-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:dd:99:c7 in network mk-multinode-494920
	I0127 02:26:05.454477 1090356 host.go:66] Checking if "multinode-494920-m02" exists ...
	I0127 02:26:05.454795 1090356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:26:05.454837 1090356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:26:05.470311 1090356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0127 02:26:05.470758 1090356 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:26:05.471261 1090356 main.go:141] libmachine: Using API Version  1
	I0127 02:26:05.471286 1090356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:26:05.471655 1090356 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:26:05.471863 1090356 main.go:141] libmachine: (multinode-494920-m02) Calling .DriverName
	I0127 02:26:05.472029 1090356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:26:05.472051 1090356 main.go:141] libmachine: (multinode-494920-m02) Calling .GetSSHHostname
	I0127 02:26:05.474848 1090356 main.go:141] libmachine: (multinode-494920-m02) DBG | domain multinode-494920-m02 has defined MAC address 52:54:00:dd:99:c7 in network mk-multinode-494920
	I0127 02:26:05.475263 1090356 main.go:141] libmachine: (multinode-494920-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:99:c7", ip: ""} in network mk-multinode-494920: {Iface:virbr1 ExpiryTime:2025-01-27 03:24:25 +0000 UTC Type:0 Mac:52:54:00:dd:99:c7 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-494920-m02 Clientid:01:52:54:00:dd:99:c7}
	I0127 02:26:05.475291 1090356 main.go:141] libmachine: (multinode-494920-m02) DBG | domain multinode-494920-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:dd:99:c7 in network mk-multinode-494920
	I0127 02:26:05.475452 1090356 main.go:141] libmachine: (multinode-494920-m02) Calling .GetSSHPort
	I0127 02:26:05.475660 1090356 main.go:141] libmachine: (multinode-494920-m02) Calling .GetSSHKeyPath
	I0127 02:26:05.475856 1090356 main.go:141] libmachine: (multinode-494920-m02) Calling .GetSSHUsername
	I0127 02:26:05.476083 1090356 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/multinode-494920-m02/id_rsa Username:docker}
	I0127 02:26:05.553033 1090356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:26:05.569717 1090356 status.go:176] multinode-494920-m02 status: &{Name:multinode-494920-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:26:05.569758 1090356 status.go:174] checking status of multinode-494920-m03 ...
	I0127 02:26:05.570111 1090356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:26:05.570165 1090356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:26:05.585718 1090356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0127 02:26:05.586170 1090356 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:26:05.586668 1090356 main.go:141] libmachine: Using API Version  1
	I0127 02:26:05.586687 1090356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:26:05.586990 1090356 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:26:05.587230 1090356 main.go:141] libmachine: (multinode-494920-m03) Calling .GetState
	I0127 02:26:05.588682 1090356 status.go:371] multinode-494920-m03 host status = "Stopped" (err=<nil>)
	I0127 02:26:05.588695 1090356 status.go:384] host is not running, skipping remaining checks
	I0127 02:26:05.588703 1090356 status.go:176] multinode-494920-m03 status: &{Name:multinode-494920-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (35.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-494920 node start m03 -v=7 --alsologtostderr: (35.035273283s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (35.68s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (317.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-494920
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-494920
E0127 02:26:46.281793 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:29:00.959563 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-494920: (3m3.251388236s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-494920 --wait=true -v=8 --alsologtostderr
E0127 02:31:46.279303 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-494920 --wait=true -v=8 --alsologtostderr: (2m14.526852157s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-494920
--- PASS: TestMultiNode/serial/RestartKeepsNodes (317.88s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-494920 node delete m03: (1.490800443s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.03s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 stop
E0127 02:32:04.024765 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:34:00.959310 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-494920 stop: (3m1.905128451s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-494920 status: exit status 7 (92.777325ms)

                                                
                                                
-- stdout --
	multinode-494920
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-494920-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-494920 status --alsologtostderr: exit status 7 (90.957695ms)

                                                
                                                
-- stdout --
	multinode-494920
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-494920-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:35:03.231371 1093100 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:35:03.231489 1093100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:35:03.231499 1093100 out.go:358] Setting ErrFile to fd 2...
	I0127 02:35:03.231503 1093100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:35:03.231678 1093100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 02:35:03.231834 1093100 out.go:352] Setting JSON to false
	I0127 02:35:03.231864 1093100 mustload.go:65] Loading cluster: multinode-494920
	I0127 02:35:03.231892 1093100 notify.go:220] Checking for updates...
	I0127 02:35:03.232246 1093100 config.go:182] Loaded profile config "multinode-494920": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:35:03.232267 1093100 status.go:174] checking status of multinode-494920 ...
	I0127 02:35:03.232692 1093100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:35:03.232731 1093100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:35:03.252852 1093100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39927
	I0127 02:35:03.253405 1093100 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:35:03.254033 1093100 main.go:141] libmachine: Using API Version  1
	I0127 02:35:03.254066 1093100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:35:03.254440 1093100 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:35:03.254654 1093100 main.go:141] libmachine: (multinode-494920) Calling .GetState
	I0127 02:35:03.256237 1093100 status.go:371] multinode-494920 host status = "Stopped" (err=<nil>)
	I0127 02:35:03.256264 1093100 status.go:384] host is not running, skipping remaining checks
	I0127 02:35:03.256272 1093100 status.go:176] multinode-494920 status: &{Name:multinode-494920 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:35:03.256327 1093100 status.go:174] checking status of multinode-494920-m02 ...
	I0127 02:35:03.256630 1093100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 02:35:03.256676 1093100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:35:03.271454 1093100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0127 02:35:03.271911 1093100 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:35:03.272393 1093100 main.go:141] libmachine: Using API Version  1
	I0127 02:35:03.272420 1093100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:35:03.272705 1093100 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:35:03.272870 1093100 main.go:141] libmachine: (multinode-494920-m02) Calling .GetState
	I0127 02:35:03.274138 1093100 status.go:371] multinode-494920-m02 host status = "Stopped" (err=<nil>)
	I0127 02:35:03.274163 1093100 status.go:384] host is not running, skipping remaining checks
	I0127 02:35:03.274171 1093100 status.go:176] multinode-494920-m02 status: &{Name:multinode-494920-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.09s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (96.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-494920 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-494920 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m35.581382629s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-494920 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (96.12s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-494920
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-494920-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-494920-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (67.283857ms)

                                                
                                                
-- stdout --
	* [multinode-494920-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-494920-m02' is duplicated with machine name 'multinode-494920-m02' in profile 'multinode-494920'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-494920-m03 --driver=kvm2  --container-runtime=containerd
E0127 02:36:46.282338 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-494920-m03 --driver=kvm2  --container-runtime=containerd: (46.077405179s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-494920
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-494920: exit status 80 (220.403922ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-494920 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-494920-m03 already exists in multinode-494920-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-494920-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-494920-m03: (1.040780589s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.46s)

                                                
                                    
TestPreload (260.24s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-786767 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0127 02:39:00.955549 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-786767 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m58.754101551s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-786767 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-786767 image pull gcr.io/k8s-minikube/busybox: (1.562756467s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-786767
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-786767: (1m30.972318283s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-786767 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0127 02:41:29.352232 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:41:46.279459 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-786767 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (47.662209651s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-786767 image list
helpers_test.go:175: Cleaning up "test-preload-786767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-786767
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-786767: (1.060615558s)
--- PASS: TestPreload (260.24s)

                                                
                                    
TestScheduledStopUnix (120.08s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-885099 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-885099 --memory=2048 --driver=kvm2  --container-runtime=containerd: (48.408120234s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-885099 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-885099 -n scheduled-stop-885099
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-885099 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 02:42:37.264378 1064439 retry.go:31] will retry after 133.653µs: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.265545 1064439 retry.go:31] will retry after 102.55µs: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.266727 1064439 retry.go:31] will retry after 167.064µs: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.267889 1064439 retry.go:31] will retry after 360.543µs: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.269018 1064439 retry.go:31] will retry after 747.56µs: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.270191 1064439 retry.go:31] will retry after 634.372µs: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.271340 1064439 retry.go:31] will retry after 1.123067ms: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.273555 1064439 retry.go:31] will retry after 940.62µs: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.274690 1064439 retry.go:31] will retry after 1.552917ms: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.276886 1064439 retry.go:31] will retry after 3.392143ms: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.281101 1064439 retry.go:31] will retry after 5.803449ms: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.287376 1064439 retry.go:31] will retry after 10.169759ms: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.298639 1064439 retry.go:31] will retry after 7.572964ms: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.306904 1064439 retry.go:31] will retry after 26.735751ms: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
I0127 02:42:37.334156 1064439 retry.go:31] will retry after 29.690409ms: open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/scheduled-stop-885099/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-885099 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-885099 -n scheduled-stop-885099
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-885099
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-885099 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-885099
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-885099: exit status 7 (70.143524ms)

                                                
                                                
-- stdout --
	scheduled-stop-885099
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-885099 -n scheduled-stop-885099
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-885099 -n scheduled-stop-885099: exit status 7 (76.1532ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-885099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-885099
--- PASS: TestScheduledStopUnix (120.08s)

                                                
                                    
TestRunningBinaryUpgrade (151.74s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2037485099 start -p running-upgrade-382666 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2037485099 start -p running-upgrade-382666 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m4.816662701s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-382666 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-382666 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m25.013363285s)
helpers_test.go:175: Cleaning up "running-upgrade-382666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-382666
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-382666: (1.296139054s)
--- PASS: TestRunningBinaryUpgrade (151.74s)

                                                
                                    
TestKubernetesUpgrade (162.6s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-465653 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-465653 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m7.9565367s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-465653
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-465653: (2.336835879s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-465653 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-465653 status --format={{.Host}}: exit status 7 (108.714124ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-465653 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-465653 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (54.907442017s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-465653 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-465653 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-465653 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (100.010581ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-465653] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-465653
	    minikube start -p kubernetes-upgrade-465653 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4656532 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-465653 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-465653 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-465653 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (36.033061108s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-465653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-465653
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-465653: (1.09835125s)
--- PASS: TestKubernetesUpgrade (162.60s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-023165 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-023165 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (93.085093ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-023165] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (96.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-023165 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-023165 --driver=kvm2  --container-runtime=containerd: (1m36.726581735s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-023165 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.99s)

                                                
                                    
TestNetworkPlugins/group/false (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-541715 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-541715 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (107.325557ms)

                                                
                                                
-- stdout --
	* [false-541715] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:43:51.788374 1097620 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:43:51.788520 1097620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:43:51.788534 1097620 out.go:358] Setting ErrFile to fd 2...
	I0127 02:43:51.788541 1097620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:43:51.788758 1097620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
	I0127 02:43:51.789374 1097620 out.go:352] Setting JSON to false
	I0127 02:43:51.790424 1097620 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12379,"bootTime":1737933453,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:43:51.790571 1097620 start.go:139] virtualization: kvm guest
	I0127 02:43:51.792613 1097620 out.go:177] * [false-541715] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:43:51.793856 1097620 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:43:51.793863 1097620 notify.go:220] Checking for updates...
	I0127 02:43:51.796381 1097620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:43:51.797462 1097620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
	I0127 02:43:51.798480 1097620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
	I0127 02:43:51.799543 1097620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:43:51.800653 1097620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:43:51.802475 1097620 config.go:182] Loaded profile config "NoKubernetes-023165": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:43:51.802626 1097620 config.go:182] Loaded profile config "force-systemd-env-064299": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:43:51.802759 1097620 config.go:182] Loaded profile config "offline-containerd-004386": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:43:51.802888 1097620 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:43:51.839704 1097620 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 02:43:51.840888 1097620 start.go:297] selected driver: kvm2
	I0127 02:43:51.840904 1097620 start.go:901] validating driver "kvm2" against <nil>
	I0127 02:43:51.840920 1097620 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:43:51.842988 1097620 out.go:201] 
	W0127 02:43:51.844125 1097620 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0127 02:43:51.845279 1097620 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-541715 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-541715

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-541715

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-541715

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-541715

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-541715

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-541715

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-541715

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-541715

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-541715

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-541715

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-541715

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-541715" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-541715" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-541715

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541715"

                                                
                                                
----------------------- debugLogs end: false-541715 [took: 2.950304185s] --------------------------------
helpers_test.go:175: Cleaning up "false-541715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-541715
--- PASS: TestNetworkPlugins/group/false (3.21s)
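Note: the repeated "Profile \"false-541715\" not found" lines above come from the post-test debug collector probing a profile that had already been cleaned up, so every host-side query falls back to the same hint. A minimal sketch of the same host-side checks against a profile that is still running (profile name illustrative; not the harness's exact collector):
    out/minikube-linux-amd64 profile list                                              # confirm the profile exists
    out/minikube-linux-amd64 ssh -p auto-541715 "sudo cat /etc/containerd/config.toml" # containerd daemon config
    out/minikube-linux-amd64 ssh -p auto-541715 "sudo systemctl is-active containerd"  # containerd daemon status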

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (78.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-023165 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-023165 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m16.976863936s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-023165 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-023165 status -o json: exit status 2 (608.230085ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-023165","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-023165
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-023165: (1.059161233s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (78.64s)
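A minimal sketch to reproduce this step by hand, using the same flags the test passes (profile name arbitrary):
    # start a VM-only profile: the machine comes up but kubelet/apiserver stay stopped
    out/minikube-linux-amd64 start -p NoKubernetes-023165 --no-kubernetes --driver=kvm2 --container-runtime=containerd
    # status exits 2 here by design, since the Kubernetes components are intentionally not running
    out/minikube-linux-amd64 -p NoKubernetes-023165 status -o json
    out/minikube-linux-amd64 delete -p NoKubernetes-023165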

                                                
                                    
x
+
TestNoKubernetes/serial/Start (57.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-023165 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0127 02:46:46.280218 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-023165 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (57.772784928s)
--- PASS: TestNoKubernetes/serial/Start (57.77s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-023165 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-023165 "sudo systemctl is-active --quiet service kubelet": exit status 1 (227.518296ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
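The non-zero exit above is the assertion itself: kubelet must report inactive. A quick manual equivalent, using the same command the test runs:
    out/minikube-linux-amd64 ssh -p NoKubernetes-023165 "sudo systemctl is-active --quiet service kubelet"
    echo $?    # any non-zero value means kubelet is not active, which is exactly what this test expects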

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.030527886s)
--- PASS: TestNoKubernetes/serial/ProfileList (1.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-023165
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-023165: (2.321959958s)
--- PASS: TestNoKubernetes/serial/Stop (2.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (23.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-023165 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-023165 --driver=kvm2  --container-runtime=containerd: (23.871788884s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.87s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-023165 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-023165 "sudo systemctl is-active --quiet service kubelet": exit status 1 (219.790083ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (167.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.341503777 start -p stopped-upgrade-850372 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0127 02:48:44.026269 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.341503777 start -p stopped-upgrade-850372 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m12.920461007s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.341503777 -p stopped-upgrade-850372 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.341503777 -p stopped-upgrade-850372 stop: (2.180815691s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-850372 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-850372 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m31.956380753s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (167.06s)
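The upgrade path exercised here can be replayed with the same three commands: boot with the old release binary, stop the cluster, then restart it with the binary under test (the old binary path is whatever temp file the harness downloaded):
    /tmp/minikube-v1.26.0.341503777 start -p stopped-upgrade-850372 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    /tmp/minikube-v1.26.0.341503777 -p stopped-upgrade-850372 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-850372 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd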

                                                
                                    
x
+
TestPause/serial/Start (65.27s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-920510 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-920510 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m5.272465581s)
--- PASS: TestPause/serial/Start (65.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (86.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m26.787973143s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (100.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m40.222515116s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.22s)
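Each network-plugin group differs only in the --cni value passed at start. A minimal sketch, taken from the commands in this section, showing a built-in plugin name versus a custom manifest:
    # built-in plugin selected by name
    out/minikube-linux-amd64 start -p calico-541715 --memory=3072 --wait=true --cni=calico --driver=kvm2 --container-runtime=containerd
    # or point --cni at a manifest file, as the custom-flannel group does
    out/minikube-linux-amd64 start -p custom-flannel-541715 --memory=3072 --wait=true --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=containerd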

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (85.88s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-920510 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-920510 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m25.846232914s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (85.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-541715 "pgrep -a kubelet"
I0127 02:50:52.908064 1064439 config.go:182] Loaded profile config "auto-541715": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-541715 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-k6znr" [7fd7e15d-6978-4cb3-b343-dca2b628994b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-k6znr" [7fd7e15d-6978-4cb3-b343-dca2b628994b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005701398s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-850372
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-850372: (1.345173314s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (69.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m9.403735218s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-541715 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
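The DNS/Localhost/HairPin trio above all run against the same netcat deployment; the checks reduce to three kubectl execs (context name taken from this group):
    kubectl --context auto-541715 exec deployment/netcat -- nslookup kubernetes.default                    # cluster DNS
    kubectl --context auto-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # localhost reachability
    kubectl --context auto-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # hairpin: the pod reaching itself through its own service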

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (72.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m12.086136241s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.09s)

                                                
                                    
x
+
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-920510 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-920510 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-920510 --output=json --layout=cluster: exit status 2 (266.399209ms)

                                                
                                                
-- stdout --
	{"Name":"pause-920510","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-920510","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-920510 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.79s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-920510 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.01s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-920510 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-920510 --alsologtostderr -v=5: (1.009115429s)
--- PASS: TestPause/serial/DeletePaused (1.01s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.46s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)
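The whole pause lifecycle above, condensed into the commands the group runs (status exits 2 while the node is paused):
    out/minikube-linux-amd64 pause -p pause-920510 --alsologtostderr -v=5
    out/minikube-linux-amd64 status -p pause-920510 --output=json --layout=cluster
    out/minikube-linux-amd64 unpause -p pause-920510 --alsologtostderr -v=5
    out/minikube-linux-amd64 delete -p pause-920510 --alsologtostderr -v=5
    out/minikube-linux-amd64 profile list --output json    # verifies the deleted profile no longer appears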

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (105.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m45.838615602s)
--- PASS: TestNetworkPlugins/group/flannel/Start (105.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tzdb5" [0c30adc6-1c08-4eec-99b7-189d3e3e02bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005124843s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-541715 "pgrep -a kubelet"
I0127 02:51:38.997416 1064439 config.go:182] Loaded profile config "calico-541715": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-541715 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-k5k4l" [5a0e563d-bc62-49d4-8369-669f469215d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-k5k4l" [5a0e563d-bc62-49d4-8369-669f469215d2] Running
E0127 02:51:46.279258 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004616983s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-541715 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (99.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m39.562168523s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (99.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-541715 "pgrep -a kubelet"
I0127 02:52:11.422248 1064439 config.go:182] Loaded profile config "custom-flannel-541715": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-541715 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-k2662" [dc8dd558-21e2-45b8-a7d3-630c57c7d7a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-k2662" [dc8dd558-21e2-45b8-a7d3-630c57c7d7a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004490785s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-541715 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qxck6" [ddd70c69-1d84-4159-8823-6c137eb26438] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004571248s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-541715 "pgrep -a kubelet"
I0127 02:52:38.443739 1064439 config.go:182] Loaded profile config "kindnet-541715": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-541715 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-w24hc" [83b5ea16-bfd3-4fd3-a6cc-2ee2618264cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-w24hc" [83b5ea16-bfd3-4fd3-a6cc-2ee2618264cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004549641s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (72.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-541715 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m12.514326787s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-541715 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (167.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-760492 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-760492 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m47.284730039s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (167.28s)
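This group pins an older Kubernetes release at start time; the distinguishing flag is --kubernetes-version (command copied from the run above, logging flags omitted):
    out/minikube-linux-amd64 start -p old-k8s-version-760492 --memory=2200 --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.20.0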

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pzn8w" [6acc3be6-96e5-4873-be0b-73f72c2802ff] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004565465s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-541715 "pgrep -a kubelet"
I0127 02:53:16.347352 1064439 config.go:182] Loaded profile config "flannel-541715": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-541715 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ztbcq" [48b11fcd-d202-4c6c-b40b-e9f4b8b51761] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ztbcq" [48b11fcd-d202-4c6c-b40b-e9f4b8b51761] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003680886s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-541715 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (112.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-887091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-887091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m52.494177312s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (112.49s)
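The no-preload group disables the preloaded images tarball, so every image is pulled during start; the only difference from a default start is --preload=false plus an explicit --kubernetes-version (command copied from the run above, logging flags omitted):
    out/minikube-linux-amd64 start -p no-preload-887091 --memory=2200 --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1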

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-541715 "pgrep -a kubelet"
I0127 02:53:49.387981 1064439 config.go:182] Loaded profile config "enable-default-cni-541715": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-541715 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4dkbm" [059ecca3-68b0-4b2a-888d-a0e663527836] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4dkbm" [059ecca3-68b0-4b2a-888d-a0e663527836] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004771734s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-541715 "pgrep -a kubelet"
I0127 02:53:51.339667 1064439 config.go:182] Loaded profile config "bridge-541715": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-541715 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-541715 replace --force -f testdata/netcat-deployment.yaml: (1.543900143s)
I0127 02:53:53.315829 1064439 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0127 02:53:53.420061 1064439 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-d6h7z" [413e2119-5e45-4597-91c4-ae5b971f7dcb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-d6h7z" [413e2119-5e45-4597-91c4-ae5b971f7dcb] Running
E0127 02:54:00.956044 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003225531s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-541715 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-541715 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-541715 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
E0127 03:02:59.918638 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:10.123695 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:37.825401 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:49.654066 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:52.885568 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:04:00.955718 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:04:17.356682 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:04:20.588342 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:24.028388 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:53.153826 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:53.555618 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:53.562050 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:53.573421 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:53.594963 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:53.636414 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:53.717867 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:53.879449 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:54.201158 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:54.843202 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:56.124599 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:58.685993 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:06:03.808098 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:06:14.050251 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:06:32.748605 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:06:34.531996 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:06:46.281150 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:07:11.643007 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:07:15.493406 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:07:32.216748 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:08:10.122859 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:08:37.414849 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:08:49.654245 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:08:52.886241 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:09:00.955422 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:10:53.153252 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:10:53.555699 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:11:21.256240 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:11:32.748776 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:11:46.280970 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:11.643653 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:16.217982 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:32.215981 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:55.813104 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:10.122831 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:34.705977 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:49.654190 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:52.886334 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:55.281123 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:14:00.955316 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:14:33.187190 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:14:49.355224 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:15:12.718347 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:15:15.951649 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:15:53.153771 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:15:53.555734 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:16:32.748969 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:16:46.281127 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:17:11.643115 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:17:32.216794 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:18:10.122822 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:18:49.654410 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:18:52.886177 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:19:00.955266 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:20:53.153327 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:20:53.556231 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:21:32.748214 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:21:46.279832 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:22:04.030702 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:22:11.643716 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:22:16.617637 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/old-k8s-version-760492/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:22:32.216070 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:23:10.123412 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (86.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-264552 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-264552 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m26.831462691s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.83s)
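Note: FirstStart for this group passes --embed-certs, which inlines the client certificate into the kubeconfig instead of referencing files on disk. A quick check of that behaviour is sketched below; it assumes minikube names the kubeconfig user after the profile (embed-certs-264552), which is not shown in the log itself.

    # With --embed-certs the user entry carries client-certificate-data (base64)
    # rather than a client-certificate file path.
    kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-264552")].user.client-certificate-data}' | head -c 40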

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-717075 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-717075 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m25.733581076s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.73s)
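Note: this profile is started with --apiserver-port=8444, so the kubeconfig cluster entry should point at that port. A sketch of the check, assuming the cluster entry is named after the profile:

    # Expected to print a server URL ending in :8444.
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-717075")].cluster.server}'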

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-887091 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f337ecce-380e-47af-8266-eed74292d545] Pending
helpers_test.go:344: "busybox" [f337ecce-380e-47af-8266-eed74292d545] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f337ecce-380e-47af-8266-eed74292d545] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005239502s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-887091 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)
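Note: the DeployApp step above creates a pod from testdata/busybox.yaml, waits for the integration-test=busybox label to report Ready, and reads the container's open-file limit. The shell sketch below reproduces that flow without the Go helpers; the busybox image tag and the sleep command are illustrative assumptions, not the contents of the repository's manifest.

    # Hypothetical stand-in for `kubectl create -f testdata/busybox.yaml`;
    # only the integration-test=busybox label is taken from the log above.
    kubectl --context no-preload-887091 run busybox \
      --image=busybox:1.28 --restart=Never \
      --labels=integration-test=busybox -- sleep 3600
    # Wait for readiness, then repeat the test's file-descriptor probe.
    kubectl --context no-preload-887091 wait --for=condition=Ready pod/busybox --timeout=8m0s
    kubectl --context no-preload-887091 exec busybox -- /bin/sh -c "ulimit -n"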

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-887091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-887091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.083319738s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-887091 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)
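Note: EnableAddonWhileActive enables metrics-server with image and registry overrides and then describes the deployment. A minimal way to confirm the overrides landed, using only the profile name and flags shown in the log:

    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-887091 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # The rendered container spec should now reference fake.domain/registry.k8s.io/echoserver:1.4.
    kubectl --context no-preload-887091 -n kube-system describe deploy/metrics-server | grep -i 'image:'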

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (90.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-887091 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-887091 --alsologtostderr -v=3: (1m30.853077615s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.85s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-264552 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4afa4a50-5ec4-4043-bd63-8112dd548b11] Pending
helpers_test.go:344: "busybox" [4afa4a50-5ec4-4043-bd63-8112dd548b11] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4afa4a50-5ec4-4043-bd63-8112dd548b11] Running
E0127 02:55:53.153478 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:55:53.159795 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:55:53.171539 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:55:53.193025 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:55:53.234471 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004606063s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-264552 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-717075 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0d2aed8a-c138-431d-b781-c7553d93417a] Pending
helpers_test.go:344: "busybox" [0d2aed8a-c138-431d-b781-c7553d93417a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0d2aed8a-c138-431d-b781-c7553d93417a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00458083s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-717075 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-760492 create -f testdata/busybox.yaml
E0127 02:55:53.316272 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:55:53.478157 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [59406047-5f93-496f-8a8e-e87229a25e41] Pending
E0127 02:55:53.799910 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:55:54.441786 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [59406047-5f93-496f-8a8e-e87229a25e41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0127 02:55:55.723511 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [59406047-5f93-496f-8a8e-e87229a25e41] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003657015s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-760492 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-264552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-264552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.0179945s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-264552 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-717075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-717075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.000888601s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-717075 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (91.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-264552 --alsologtostderr -v=3
E0127 02:55:58.285681 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-264552 --alsologtostderr -v=3: (1m31.767400754s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.77s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-717075 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-717075 --alsologtostderr -v=3: (1m31.098534293s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-760492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0127 02:56:03.407559 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-760492 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (91.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-760492 --alsologtostderr -v=3
E0127 02:56:13.649539 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:32.749196 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:32.755701 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:32.767111 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:32.788561 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:32.830036 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:32.911655 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:33.073487 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:33.395454 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:34.037425 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:34.131500 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:35.319268 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:37.880732 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:43.002695 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:46.279420 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:53.244264 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:11.642742 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:11.649150 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:11.660512 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:11.681872 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:11.723437 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:11.804991 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:11.966625 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:12.288314 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:12.930284 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:13.725607 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:14.212446 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:15.093633 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:16.774748 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-760492 --alsologtostderr -v=3: (1m31.072561131s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-887091 -n no-preload-887091
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-887091 -n no-preload-887091: exit status 7 (76.732205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-887091 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
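Note: EnableAddonAfterStop relies on `minikube status` exiting with code 7 against a stopped VM while still printing Stopped, which the test treats as acceptable ("may be ok"). A small scripting sketch that tolerates that exit code, using the profile name from the log:

    # Capture the exit code instead of letting `set -e` abort; 7 means the host is stopped.
    state=$(out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-887091 -n no-preload-887091) || rc=$?
    echo "host=${state} exit=${rc:-0}"    # expected: host=Stopped exit=7
    # Addons can still be toggled while the profile is down:
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-887091 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4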

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717075 -n default-k8s-diff-port-717075
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717075 -n default-k8s-diff-port-717075: exit status 7 (84.449264ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-717075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-264552 -n embed-certs-264552
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-264552 -n embed-certs-264552: exit status 7 (81.306928ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-264552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-760492 -n old-k8s-version-760492
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-760492 -n old-k8s-version-760492: exit status 7 (66.645962ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-760492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0127 02:57:34.786092 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (194.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-760492 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0127 02:57:37.348390 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:42.469989 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:52.619157 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:52.711744 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:54.687421 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:09.353578 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:10.123379 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:10.129790 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:10.141188 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:10.162664 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:10.204090 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:10.285606 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:10.447076 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:10.768793 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:11.410462 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:12.692105 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:13.193484 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:15.253461 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:20.375854 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:30.617720 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:33.580899 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:37.015180 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:49.654004 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:49.660732 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:49.672024 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:49.693523 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:49.734849 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:49.817089 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:49.979166 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:50.300889 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:50.942525 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:51.099893 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:52.224068 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:52.886108 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:52.892489 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:52.903922 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:52.925365 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:52.966850 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:53.048325 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:53.210040 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:53.531750 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:54.155795 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:54.173029 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:54.786145 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:55.454976 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:58.016277 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:58:59.907539 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:59:00.955439 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/functional-249952/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:59:03.138689 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:59:10.149745 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:59:13.381038 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:59:16.608803 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:59:30.631104 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:59:32.062058 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:59:33.862385 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:59:55.502935 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:00:11.593110 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:00:14.824643 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:00:16.077358 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-760492 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m14.581397843s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-760492 -n old-k8s-version-760492
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (194.86s)
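Note: this second start runs the KVM driver against the system libvirt URI with the default network (--kvm-qemu-uri=qemu:///system --kvm-network=default). If a start like this stalls, the libvirt side can be inspected directly; the commands below are a generic sketch, not part of the test:

    # The "default" libvirt network should be active and the profile's domain running.
    virsh --connect qemu:///system net-list --all
    virsh --connect qemu:///system list --all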

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-tms6s" [c092fb7a-e3bc-4bbe-b22e-33cad9b836b4] Running
E0127 03:00:53.153149 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:00:53.984010 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004542189s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
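Note: UserAppExistsAfterStop polls for up to 9 minutes for a pod carrying k8s-app=kubernetes-dashboard. Roughly the same wait expressed with kubectl alone (unlike the test helper, `kubectl wait` fails immediately if no matching pod exists yet):

    kubectl --context old-k8s-version-760492 -n kubernetes-dashboard wait \
      --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s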

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-tms6s" [c092fb7a-e3bc-4bbe-b22e-33cad9b836b4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005685173s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-760492 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-760492 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
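Note: VerifyKubernetesImages lists the node's images as JSON and flags anything outside the expected minikube set. One way to eyeball the same list, assuming the output schema exposes a repoTags array (not verified against this minikube version):

    out/minikube-linux-amd64 -p old-k8s-version-760492 image list --format=json \
      | jq -r '.[].repoTags[]?'    # field name is an assumption; adjust to the actual schema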

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-760492 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-760492 -n old-k8s-version-760492
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-760492 -n old-k8s-version-760492: exit status 2 (253.278372ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-760492 -n old-k8s-version-760492
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-760492 -n old-k8s-version-760492: exit status 2 (261.268874ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-760492 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-760492 -n old-k8s-version-760492
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-760492 -n old-k8s-version-760492
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.65s)
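For reference, the pause verification above can be replayed by hand with the same commands the test drives; as the log notes, a non-zero exit from status while the node is paused is expected ("exit status 2 (may be ok)"). A minimal sketch using the binary and profile from this run:

	# Pause the control plane, then confirm the reported component state.
	out/minikube-linux-amd64 pause -p old-k8s-version-760492 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-760492 -n old-k8s-version-760492   # prints "Paused", exits 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-760492 -n old-k8s-version-760492     # prints "Stopped", exits 2
	# Resume and re-check; both status calls should succeed again.
	out/minikube-linux-amd64 unpause -p old-k8s-version-760492 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-760492 -n old-k8s-version-760492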

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (51.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-642127 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 03:01:20.856609 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/auto-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:01:32.748403 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:01:33.515069 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/enable-default-cni-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:01:36.746188 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/bridge-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:01:46.279978 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/addons-994590/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-642127 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (51.447671339s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.45s)
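The passing run above is driven by the single start invocation quoted in the log; broken across lines for readability, the same command is:

	out/minikube-linux-amd64 start -p newest-cni-642127 \
	  --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=kvm2 --container-runtime=containerd \
	  --kubernetes-version=v1.32.1

The --wait list is limited to the apiserver, system pods and default service account; the later subtests warn that pods cannot schedule in this CNI mode without additional network setup, which is why the app-deployment checks are skipped.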

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-642127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-642127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.018464078s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-642127 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-642127 --alsologtostderr -v=3: (2.325630367s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-642127 -n newest-cni-642127
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-642127 -n newest-cni-642127: exit status 7 (77.308607ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-642127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
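The two steps above also show that enabling an addon does not require a running cluster: status reports the host as Stopped (exit status 7), yet the dashboard addon is still enabled successfully against the stopped profile. Reproduced by hand with the commands from the log:

	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-642127 -n newest-cni-642127   # prints "Stopped", exits 7
	out/minikube-linux-amd64 addons enable dashboard -p newest-cni-642127 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4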

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (35.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-642127 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 03:02:00.450530 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/calico-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:02:11.643663 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:02:32.216469 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/kindnet-541715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-642127 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (35.637179319s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-642127 -n newest-cni-642127
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-642127 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
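The image audit above appears to be a single CLI call plus a scan of its output for images outside the expected Kubernetes image set (here the kindnet image is flagged as non-minikube). To inspect a profile's loaded images directly:

	out/minikube-linux-amd64 -p newest-cni-642127 image list --format=json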

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-642127 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-642127 -n newest-cni-642127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-642127 -n newest-cni-642127: exit status 2 (278.930475ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-642127 -n newest-cni-642127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-642127 -n newest-cni-642127: exit status 2 (284.025828ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-642127 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-642127 -n newest-cni-642127
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-642127 -n newest-cni-642127
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.89s)

                                                
                                    

Test skip (38/316)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
256 TestNetworkPlugins/group/kubenet 3.18
265 TestNetworkPlugins/group/cilium 3.37
280 TestStartStop/group/disable-driver-mounts 0.19
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-541715 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-541715

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-541715

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-541715

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-541715

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-541715

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-541715

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-541715

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-541715

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-541715

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-541715

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-541715

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-541715" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-541715" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-541715

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541715"

                                                
                                                
----------------------- debugLogs end: kubenet-541715 [took: 3.023425691s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-541715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-541715
--- SKIP: TestNetworkPlugins/group/kubenet (3.18s)
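The long debug dump above carries no cluster data: the kubenet combination is skipped before any cluster is created, so every kubectl call fails with a missing context and every host probe reports a missing profile. The only real action taken is the cleanup the helper runs at the end:

	out/minikube-linux-amd64 delete -p kubenet-541715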

                                                
                                    
TestNetworkPlugins/group/cilium (3.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-541715 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-541715

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-541715

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-541715

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-541715

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-541715

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-541715

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-541715

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-541715

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-541715

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-541715

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-541715

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-541715" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-541715" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-541715" does not exist

>>> k8s: coredns logs:
error: context "cilium-541715" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-541715" does not exist

>>> k8s: api server logs:
error: context "cilium-541715" does not exist

>>> host: /etc/cni:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: ip a s:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: ip r s:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: iptables-save:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: iptables table nat:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-541715

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-541715

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-541715" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-541715" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-541715

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-541715

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-541715" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-541715" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-541715" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-541715" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-541715" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: kubelet daemon config:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> k8s: kubelet logs:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-541715

>>> host: docker daemon status:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: docker daemon config:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: docker system info:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: cri-docker daemon status:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: cri-docker daemon config:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: cri-dockerd version:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: containerd daemon status:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: containerd daemon config:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: containerd config dump:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: crio daemon status:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: crio daemon config:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: /etc/crio:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

>>> host: crio config:
* Profile "cilium-541715" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541715"

----------------------- debugLogs end: cilium-541715 [took: 3.224061368s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-541715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-541715
--- SKIP: TestNetworkPlugins/group/cilium (3.37s)

x
+
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-486694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-486694
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)