Test Report: KVM_Linux_containerd 20317

bb508b30435b2a744d00b2f75d06f98d338973f1:2025-01-27:38093

Failed tests (3/316)

Order  Failed test                                                     Duration (s)
358    TestStartStop/group/no-preload/serial/SecondStart               1623.09
360    TestStartStop/group/embed-certs/serial/SecondStart              1611.78
362    TestStartStop/group/default-k8s-diff-port/serial/SecondStart    1642.10
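All three failures are SecondStart tests with durations around 27 minutes; the no-preload log below shows its start command being killed (signal: killed) after 27m. As a local reproduction sketch, the exact invocation recorded in that log can be re-run as-is; the profile name, memory size, and binary path are taken from this CI run and would need adjusting for another environment:

	out/minikube-linux-amd64 start -p no-preload-325431 --memory=2200 \
	  --alsologtostderr --wait=true --preload=false --driver=kvm2 \
	  --container-runtime=containerd --kubernetes-version=v1.32.1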
TestStartStop/group/no-preload/serial/SecondStart (1623.09s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-325431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 13:23:39.253012  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:41.820414  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.071421  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.077802  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.089251  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.110738  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.152202  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.233788  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.395389  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.716697  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:49.358533  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:50.640652  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:53.202827  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:58.324550  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:08.566619  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-325431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (27m0.722930283s)

-- stdout --
	* [no-preload-325431] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-325431" primary control-plane node in "no-preload-325431" cluster
	* Restarting existing kvm2 VM for "no-preload-325431" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-325431 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

-- /stdout --
** stderr ** 
	I0127 13:23:32.645876  528954 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:23:32.645988  528954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:23:32.645996  528954 out.go:358] Setting ErrFile to fd 2...
	I0127 13:23:32.646000  528954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:23:32.646190  528954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 13:23:32.646741  528954 out.go:352] Setting JSON to false
	I0127 13:23:32.647782  528954 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":36310,"bootTime":1737947903,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:23:32.647910  528954 start.go:139] virtualization: kvm guest
	I0127 13:23:32.649979  528954 out.go:177] * [no-preload-325431] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:23:32.651448  528954 notify.go:220] Checking for updates...
	I0127 13:23:32.651473  528954 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:23:32.652842  528954 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:23:32.654268  528954 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:23:32.655537  528954 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	I0127 13:23:32.656759  528954 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:23:32.658425  528954 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:23:32.659954  528954 config.go:182] Loaded profile config "no-preload-325431": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:23:32.660327  528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:23:32.660378  528954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:23:32.675724  528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37269
	I0127 13:23:32.676252  528954 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:23:32.676865  528954 main.go:141] libmachine: Using API Version  1
	I0127 13:23:32.676893  528954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:23:32.677259  528954 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:23:32.677474  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:23:32.677782  528954 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:23:32.678237  528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:23:32.678291  528954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:23:32.693444  528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42475
	I0127 13:23:32.693854  528954 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:23:32.694326  528954 main.go:141] libmachine: Using API Version  1
	I0127 13:23:32.694352  528954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:23:32.694639  528954 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:23:32.694840  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:23:32.732796  528954 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:23:32.733939  528954 start.go:297] selected driver: kvm2
	I0127 13:23:32.733954  528954 start.go:901] validating driver "kvm2" against &{Name:no-preload-325431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-325431 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:23:32.734098  528954 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:23:32.734776  528954 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:23:32.734884  528954 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-466901/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:23:32.750482  528954 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:23:32.751028  528954 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:23:32.751081  528954 cni.go:84] Creating CNI manager for ""
	I0127 13:23:32.751165  528954 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:23:32.751218  528954 start.go:340] cluster config:
	{Name:no-preload-325431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-325431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-hos
t Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:23:32.751414  528954 iso.go:125] acquiring lock: {Name:mkcc3db98c9d4661e75c49bd9b203d0232dff8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:23:32.754267  528954 out.go:177] * Starting "no-preload-325431" primary control-plane node in "no-preload-325431" cluster
	I0127 13:23:32.755613  528954 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:23:32.755730  528954 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/config.json ...
	I0127 13:23:32.755878  528954 cache.go:107] acquiring lock: {Name:mk0425a032ced4bdea57fd149bd1003ccc819b8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:23:32.755874  528954 cache.go:107] acquiring lock: {Name:mkf1e2d7a48534619b32d5198ef9090e83eaab37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:23:32.755954  528954 cache.go:107] acquiring lock: {Name:mk39b81bdcfa7d1829955b77cfed02c1a3ca582a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:23:32.755981  528954 start.go:360] acquireMachinesLock for no-preload-325431: {Name:mke115b779db52cb0a5f0a05f83d5bad0a35c561 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:23:32.755957  528954 cache.go:107] acquiring lock: {Name:mk7cd8ee4a354ebea291b7a031d037adad6f4eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:23:32.756005  528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 13:23:32.755996  528954 cache.go:107] acquiring lock: {Name:mk79d5d01647144335c1aa4441c0442e89aa5919 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:23:32.756016  528954 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 153.625µs
	I0127 13:23:32.756043  528954 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 13:23:32.756020  528954 cache.go:107] acquiring lock: {Name:mkd93d04192eff91f8bfaec9535df9aa96f61b81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:23:32.756051  528954 start.go:364] duration metric: took 45.81µs to acquireMachinesLock for "no-preload-325431"
	I0127 13:23:32.756059  528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 13:23:32.756075  528954 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:23:32.756044  528954 cache.go:107] acquiring lock: {Name:mk47162761e1a477778394895affb07de499ad0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:23:32.756083  528954 fix.go:54] fixHost starting: 
	I0127 13:23:32.756077  528954 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 124.067µs
	I0127 13:23:32.756124  528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 13:23:32.756130  528954 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 13:23:32.756051  528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 13:23:32.756160  528954 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 163.897µs
	I0127 13:23:32.756176  528954 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 13:23:32.756157  528954 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 278.711µs
	I0127 13:23:32.756184  528954 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 13:23:32.756086  528954 cache.go:107] acquiring lock: {Name:mk56c8495b1b67a68bdb2cfb60d162b3dad1956a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:23:32.756185  528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 13:23:32.756203  528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 13:23:32.756225  528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 13:23:32.756208  528954 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 221.537µs
	I0127 13:23:32.756223  528954 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 225.392µs
	I0127 13:23:32.756252  528954 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 13:23:32.756238  528954 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 13:23:32.756047  528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 13:23:32.756274  528954 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 333.138µs
	I0127 13:23:32.756284  528954 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 13:23:32.756237  528954 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 218.675µs
	I0127 13:23:32.756297  528954 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 13:23:32.756308  528954 cache.go:87] Successfully saved all images to host disk.
	I0127 13:23:32.756447  528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:23:32.756496  528954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:23:32.771438  528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33331
	I0127 13:23:32.771868  528954 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:23:32.772390  528954 main.go:141] libmachine: Using API Version  1
	I0127 13:23:32.772412  528954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:23:32.772771  528954 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:23:32.772983  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:23:32.773187  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetState
	I0127 13:23:32.774651  528954 fix.go:112] recreateIfNeeded on no-preload-325431: state=Stopped err=<nil>
	I0127 13:23:32.774678  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	W0127 13:23:32.774840  528954 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:23:32.776648  528954 out.go:177] * Restarting existing kvm2 VM for "no-preload-325431" ...
	I0127 13:23:32.777886  528954 main.go:141] libmachine: (no-preload-325431) Calling .Start
	I0127 13:23:32.778081  528954 main.go:141] libmachine: (no-preload-325431) starting domain...
	I0127 13:23:32.778104  528954 main.go:141] libmachine: (no-preload-325431) ensuring networks are active...
	I0127 13:23:32.778918  528954 main.go:141] libmachine: (no-preload-325431) Ensuring network default is active
	I0127 13:23:32.779290  528954 main.go:141] libmachine: (no-preload-325431) Ensuring network mk-no-preload-325431 is active
	I0127 13:23:32.779607  528954 main.go:141] libmachine: (no-preload-325431) getting domain XML...
	I0127 13:23:32.780385  528954 main.go:141] libmachine: (no-preload-325431) creating domain...
	I0127 13:23:34.002987  528954 main.go:141] libmachine: (no-preload-325431) waiting for IP...
	I0127 13:23:34.003812  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:34.004345  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:34.004417  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:34.004308  528989 retry.go:31] will retry after 305.177483ms: waiting for domain to come up
	I0127 13:23:34.310911  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:34.311468  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:34.311494  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:34.311430  528989 retry.go:31] will retry after 235.274048ms: waiting for domain to come up
	I0127 13:23:34.547991  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:34.548548  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:34.548572  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:34.548525  528989 retry.go:31] will retry after 476.26083ms: waiting for domain to come up
	I0127 13:23:35.026210  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:35.026783  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:35.026842  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:35.026737  528989 retry.go:31] will retry after 396.169606ms: waiting for domain to come up
	I0127 13:23:35.424533  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:35.425057  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:35.425090  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:35.425012  528989 retry.go:31] will retry after 661.148493ms: waiting for domain to come up
	I0127 13:23:36.087979  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:36.088470  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:36.088531  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:36.088422  528989 retry.go:31] will retry after 869.822406ms: waiting for domain to come up
	I0127 13:23:36.959478  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:36.959960  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:36.959992  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:36.959884  528989 retry.go:31] will retry after 1.015846086s: waiting for domain to come up
	I0127 13:23:37.976977  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:37.977586  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:37.977613  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:37.977563  528989 retry.go:31] will retry after 1.224150031s: waiting for domain to come up
	I0127 13:23:39.204085  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:39.204606  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:39.204630  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:39.204582  528989 retry.go:31] will retry after 1.126383211s: waiting for domain to come up
	I0127 13:23:40.333113  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:40.333646  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:40.333676  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:40.333606  528989 retry.go:31] will retry after 1.430102982s: waiting for domain to come up
	I0127 13:23:41.766362  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:41.766953  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:41.766983  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:41.766915  528989 retry.go:31] will retry after 1.763139948s: waiting for domain to come up
	I0127 13:23:43.531472  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:43.532056  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:43.532087  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:43.532004  528989 retry.go:31] will retry after 3.488533794s: waiting for domain to come up
	I0127 13:23:47.024796  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:47.025343  528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
	I0127 13:23:47.025366  528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:47.025297  528989 retry.go:31] will retry after 4.076884943s: waiting for domain to come up
	I0127 13:23:51.106703  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.107241  528954 main.go:141] libmachine: (no-preload-325431) found domain IP: 192.168.50.116
	I0127 13:23:51.107320  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has current primary IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.107334  528954 main.go:141] libmachine: (no-preload-325431) reserving static IP address...
	I0127 13:23:51.107783  528954 main.go:141] libmachine: (no-preload-325431) reserved static IP address 192.168.50.116 for domain no-preload-325431
	I0127 13:23:51.107836  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "no-preload-325431", mac: "52:54:00:0d:73:1e", ip: "192.168.50.116"} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:51.107853  528954 main.go:141] libmachine: (no-preload-325431) waiting for SSH...
	I0127 13:23:51.107893  528954 main.go:141] libmachine: (no-preload-325431) DBG | skip adding static IP to network mk-no-preload-325431 - found existing host DHCP lease matching {name: "no-preload-325431", mac: "52:54:00:0d:73:1e", ip: "192.168.50.116"}
	I0127 13:23:51.107920  528954 main.go:141] libmachine: (no-preload-325431) DBG | Getting to WaitForSSH function...
	I0127 13:23:51.109777  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.110148  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:51.110186  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.110304  528954 main.go:141] libmachine: (no-preload-325431) DBG | Using SSH client type: external
	I0127 13:23:51.110347  528954 main.go:141] libmachine: (no-preload-325431) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa (-rw-------)
	I0127 13:23:51.110383  528954 main.go:141] libmachine: (no-preload-325431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:23:51.110400  528954 main.go:141] libmachine: (no-preload-325431) DBG | About to run SSH command:
	I0127 13:23:51.110405  528954 main.go:141] libmachine: (no-preload-325431) DBG | exit 0
	I0127 13:23:51.231743  528954 main.go:141] libmachine: (no-preload-325431) DBG | SSH cmd err, output: <nil>: 
	I0127 13:23:51.232168  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetConfigRaw
	I0127 13:23:51.232942  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetIP
	I0127 13:23:51.235364  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.235732  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:51.235764  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.236036  528954 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/config.json ...
	I0127 13:23:51.236240  528954 machine.go:93] provisionDockerMachine start ...
	I0127 13:23:51.236260  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:23:51.236474  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:23:51.238669  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.239024  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:51.239046  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.239167  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
	I0127 13:23:51.239363  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:23:51.239524  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:23:51.239660  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
	I0127 13:23:51.239821  528954 main.go:141] libmachine: Using SSH client type: native
	I0127 13:23:51.240084  528954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0127 13:23:51.240101  528954 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:23:51.339684  528954 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:23:51.339718  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetMachineName
	I0127 13:23:51.339991  528954 buildroot.go:166] provisioning hostname "no-preload-325431"
	I0127 13:23:51.340016  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetMachineName
	I0127 13:23:51.340239  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:23:51.342805  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.343121  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:51.343171  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.343322  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
	I0127 13:23:51.343528  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:23:51.343679  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:23:51.343796  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
	I0127 13:23:51.343932  528954 main.go:141] libmachine: Using SSH client type: native
	I0127 13:23:51.344170  528954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0127 13:23:51.344188  528954 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-325431 && echo "no-preload-325431" | sudo tee /etc/hostname
	I0127 13:23:51.458207  528954 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-325431
	
	I0127 13:23:51.458243  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:23:51.460975  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.461420  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:51.461456  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.461633  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
	I0127 13:23:51.461847  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:23:51.462003  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:23:51.462134  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
	I0127 13:23:51.462324  528954 main.go:141] libmachine: Using SSH client type: native
	I0127 13:23:51.462512  528954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0127 13:23:51.462528  528954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-325431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-325431/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-325431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:23:51.573171  528954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:23:51.573209  528954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-466901/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-466901/.minikube}
	I0127 13:23:51.573229  528954 buildroot.go:174] setting up certificates
	I0127 13:23:51.573242  528954 provision.go:84] configureAuth start
	I0127 13:23:51.573250  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetMachineName
	I0127 13:23:51.573567  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetIP
	I0127 13:23:51.576532  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.576940  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:51.576962  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.577105  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:23:51.579172  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.579599  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:51.579649  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.579746  528954 provision.go:143] copyHostCerts
	I0127 13:23:51.579813  528954 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem, removing ...
	I0127 13:23:51.579824  528954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem
	I0127 13:23:51.579910  528954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem (1082 bytes)
	I0127 13:23:51.580023  528954 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem, removing ...
	I0127 13:23:51.580032  528954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem
	I0127 13:23:51.580057  528954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem (1123 bytes)
	I0127 13:23:51.580129  528954 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem, removing ...
	I0127 13:23:51.580138  528954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem
	I0127 13:23:51.580160  528954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem (1675 bytes)
	I0127 13:23:51.580224  528954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem org=jenkins.no-preload-325431 san=[127.0.0.1 192.168.50.116 localhost minikube no-preload-325431]
	I0127 13:23:51.922420  528954 provision.go:177] copyRemoteCerts
	I0127 13:23:51.922496  528954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:23:51.922524  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:23:51.925590  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.926010  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:51.926039  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:51.926360  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
	I0127 13:23:51.926586  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:23:51.926759  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
	I0127 13:23:51.926890  528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
	I0127 13:23:52.005993  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 13:23:52.032651  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 13:23:52.058042  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:23:52.083128  528954 provision.go:87] duration metric: took 509.868537ms to configureAuth
	I0127 13:23:52.083184  528954 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:23:52.083429  528954 config.go:182] Loaded profile config "no-preload-325431": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:23:52.083447  528954 machine.go:96] duration metric: took 847.194107ms to provisionDockerMachine
	I0127 13:23:52.083457  528954 start.go:293] postStartSetup for "no-preload-325431" (driver="kvm2")
	I0127 13:23:52.083467  528954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:23:52.083513  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:23:52.083855  528954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:23:52.083886  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:23:52.086710  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:52.087095  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:52.087130  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:52.087342  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
	I0127 13:23:52.087538  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:23:52.087695  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
	I0127 13:23:52.087844  528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
	I0127 13:23:52.170584  528954 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:23:52.175597  528954 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:23:52.175631  528954 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/addons for local assets ...
	I0127 13:23:52.175710  528954 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/files for local assets ...
	I0127 13:23:52.175824  528954 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem -> 4742752.pem in /etc/ssl/certs
	I0127 13:23:52.175958  528954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:23:52.186548  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /etc/ssl/certs/4742752.pem (1708 bytes)
	I0127 13:23:52.216400  528954 start.go:296] duration metric: took 132.926627ms for postStartSetup
	I0127 13:23:52.216447  528954 fix.go:56] duration metric: took 19.460365477s for fixHost
	I0127 13:23:52.216475  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:23:52.219697  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:52.220053  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:52.220088  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:52.220300  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
	I0127 13:23:52.220564  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:23:52.220765  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:23:52.220919  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
	I0127 13:23:52.221095  528954 main.go:141] libmachine: Using SSH client type: native
	I0127 13:23:52.221263  528954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0127 13:23:52.221273  528954 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:23:52.320460  528954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984232.292373848
	
	I0127 13:23:52.320493  528954 fix.go:216] guest clock: 1737984232.292373848
	I0127 13:23:52.320500  528954 fix.go:229] Guest: 2025-01-27 13:23:52.292373848 +0000 UTC Remote: 2025-01-27 13:23:52.216451375 +0000 UTC m=+19.611033029 (delta=75.922473ms)
	I0127 13:23:52.320558  528954 fix.go:200] guest clock delta is within tolerance: 75.922473ms
	I0127 13:23:52.320565  528954 start.go:83] releasing machines lock for "no-preload-325431", held for 19.564499359s
	I0127 13:23:52.320592  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:23:52.320893  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetIP
	I0127 13:23:52.323712  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:52.324056  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:52.324093  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:52.324255  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:23:52.324958  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:23:52.325177  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:23:52.325269  528954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:23:52.325320  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:23:52.325420  528954 ssh_runner.go:195] Run: cat /version.json
	I0127 13:23:52.325450  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:23:52.327983  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:52.328295  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:52.328327  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:52.328348  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:52.328455  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
	I0127 13:23:52.328647  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:23:52.328805  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:52.328806  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
	I0127 13:23:52.328823  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:52.328997  528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
	I0127 13:23:52.329018  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
	I0127 13:23:52.329160  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:23:52.329309  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
	I0127 13:23:52.329462  528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
	I0127 13:23:52.404857  528954 ssh_runner.go:195] Run: systemctl --version
	I0127 13:23:52.424319  528954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:23:52.430464  528954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:23:52.430530  528954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:23:52.446616  528954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
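
Here minikube sidelines any pre-existing bridge and podman CNI configs by renaming them with an .mk_disabled suffix, so the CNI it writes later is the only configuration containerd will load. A rough Go equivalent of that rename pass (directory and suffix follow the logged find command; this is a sketch, not minikube's cni.go):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d" // conventional CNI config directory
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("skipping:", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// leave directories and already-disabled files alone
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// only bridge/podman configs are sidelined, as in the logged command
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Println("rename failed:", err)
			continue
		}
		fmt.Println("disabled", src)
	}
}
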
	I0127 13:23:52.446646  528954 start.go:495] detecting cgroup driver to use...
	I0127 13:23:52.446712  528954 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 13:23:52.474253  528954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 13:23:52.488807  528954 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:23:52.488890  528954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:23:52.503411  528954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:23:52.518307  528954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:23:52.630669  528954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:23:52.782751  528954 docker.go:233] disabling docker service ...
	I0127 13:23:52.782837  528954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:23:52.797543  528954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:23:52.812115  528954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:23:52.936326  528954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:23:53.057723  528954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:23:53.072402  528954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:23:53.091539  528954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 13:23:53.102146  528954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 13:23:53.112415  528954 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 13:23:53.112479  528954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 13:23:53.123126  528954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:23:53.134311  528954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 13:23:53.145193  528954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:23:53.156130  528954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:23:53.167195  528954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 13:23:53.178035  528954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 13:23:53.188548  528954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
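
The sed runs above patch /etc/containerd/config.toml in place: pin the pause image, allow unprivileged ports, point conf_dir at /etc/cni/net.d, and force SystemdCgroup = false so containerd uses the cgroupfs driver chosen earlier. A hedged Go sketch of that last edit, using the same regular-expression idea as the logged sed command:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	// same transform as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Println(err)
	}
}
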
	I0127 13:23:53.199463  528954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:23:53.209469  528954 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:23:53.209534  528954 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:23:53.224391  528954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
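
The sysctl probe fails with status 255 because the bridge netfilter keys only appear once the br_netfilter module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding, both of which kube-proxy and the bridge CNI rely on. A small Go sketch of that fallback (standard kernel paths; needs root; layout is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// the key only exists after br_netfilter is loaded
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err, string(out))
			return
		}
	}
	// equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
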
	I0127 13:23:53.234772  528954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:23:53.350801  528954 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 13:23:53.380104  528954 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 13:23:53.380179  528954 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 13:23:53.385238  528954 retry.go:31] will retry after 1.052994237s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 13:23:54.438481  528954 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
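
After restarting containerd, minikube gives the socket 60 seconds to appear; the first stat of /run/containerd/containerd.sock fails and the retry about a second later succeeds. A minimal Go version of that kind of socket wait (timeout and poll interval are illustrative):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the path exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	fmt.Println(waitForPath("/run/containerd/containerd.sock", 60*time.Second))
}
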
	I0127 13:23:54.444330  528954 start.go:563] Will wait 60s for crictl version
	I0127 13:23:54.444395  528954 ssh_runner.go:195] Run: which crictl
	I0127 13:23:54.448559  528954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:23:54.489223  528954 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 13:23:54.489301  528954 ssh_runner.go:195] Run: containerd --version
	I0127 13:23:54.515421  528954 ssh_runner.go:195] Run: containerd --version
	I0127 13:23:54.544703  528954 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 13:23:54.545920  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetIP
	I0127 13:23:54.548686  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:54.549043  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:23:54.549075  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:23:54.549338  528954 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 13:23:54.554275  528954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
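
The bash one-liner above rewrites /etc/hosts so that exactly one entry maps host.minikube.internal to the host-side gateway IP (the same trick is repeated later for control-plane.minikube.internal). The same idea in Go, as a sketch with simplified error handling; path and entry are taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line ending in "\t"+name and appends a fresh one.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"))
}
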
	I0127 13:23:54.567128  528954 kubeadm.go:883] updating cluster {Name:no-preload-325431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-325431 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:23:54.567304  528954 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:23:54.567358  528954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:23:54.605284  528954 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:23:54.605318  528954 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:23:54.605328  528954 kubeadm.go:934] updating node { 192.168.50.116 8443 v1.32.1 containerd true true} ...
	I0127 13:23:54.605459  528954 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-325431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-325431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:23:54.605536  528954 ssh_runner.go:195] Run: sudo crictl info
	I0127 13:23:54.641877  528954 cni.go:84] Creating CNI manager for ""
	I0127 13:23:54.641902  528954 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:23:54.641913  528954 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:23:54.641935  528954 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-325431 NodeName:no-preload-325431 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:23:54.642062  528954 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-325431"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.116"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:23:54.642146  528954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:23:54.652780  528954 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:23:54.652853  528954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:23:54.662470  528954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0127 13:23:54.680212  528954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:23:54.697880  528954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2313 bytes)
	I0127 13:23:54.715864  528954 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I0127 13:23:54.719880  528954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:23:54.732808  528954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:23:54.847512  528954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:23:54.867242  528954 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431 for IP: 192.168.50.116
	I0127 13:23:54.867288  528954 certs.go:194] generating shared ca certs ...
	I0127 13:23:54.867312  528954 certs.go:226] acquiring lock for ca certs: {Name:mk60f2aac78eb363c5e06a00675357d94c0df88d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:23:54.867512  528954 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key
	I0127 13:23:54.867569  528954 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key
	I0127 13:23:54.867590  528954 certs.go:256] generating profile certs ...
	I0127 13:23:54.867717  528954 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/client.key
	I0127 13:23:54.867803  528954 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/apiserver.key.00944cb6
	I0127 13:23:54.867870  528954 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/proxy-client.key
	I0127 13:23:54.868039  528954 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem (1338 bytes)
	W0127 13:23:54.868090  528954 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275_empty.pem, impossibly tiny 0 bytes
	I0127 13:23:54.868103  528954 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:23:54.868137  528954 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem (1082 bytes)
	I0127 13:23:54.868169  528954 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:23:54.868205  528954 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem (1675 bytes)
	I0127 13:23:54.868260  528954 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem (1708 bytes)
	I0127 13:23:54.868948  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:23:54.916286  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:23:54.951595  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:23:54.985978  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:23:55.017210  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 13:23:55.046840  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:23:55.080541  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:23:55.107806  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:23:55.134194  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /usr/share/ca-certificates/4742752.pem (1708 bytes)
	I0127 13:23:55.158899  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:23:55.183077  528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem --> /usr/share/ca-certificates/474275.pem (1338 bytes)
	I0127 13:23:55.208128  528954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:23:55.225606  528954 ssh_runner.go:195] Run: openssl version
	I0127 13:23:55.231583  528954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/474275.pem && ln -fs /usr/share/ca-certificates/474275.pem /etc/ssl/certs/474275.pem"
	I0127 13:23:55.242957  528954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/474275.pem
	I0127 13:23:55.247769  528954 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:18 /usr/share/ca-certificates/474275.pem
	I0127 13:23:55.247833  528954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/474275.pem
	I0127 13:23:55.253810  528954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/474275.pem /etc/ssl/certs/51391683.0"
	I0127 13:23:55.264734  528954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4742752.pem && ln -fs /usr/share/ca-certificates/4742752.pem /etc/ssl/certs/4742752.pem"
	I0127 13:23:55.275229  528954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4742752.pem
	I0127 13:23:55.279764  528954 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:18 /usr/share/ca-certificates/4742752.pem
	I0127 13:23:55.279820  528954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4742752.pem
	I0127 13:23:55.285356  528954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4742752.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:23:55.296693  528954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:23:55.307430  528954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:23:55.311970  528954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:10 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:23:55.312034  528954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:23:55.317751  528954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
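
These openssl/ln pairs publish each CA under /etc/ssl/certs by its OpenSSL subject hash (minikubeCA.pem ends up as b5213941.0), which is how system TLS lookups locate trusted roots. A hedged Go sketch that shells out to openssl the same way the logged commands do (paths from the log; not minikube's certs.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	// equivalent of: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err == nil {
		return // already linked
	}
	if err := os.Symlink(pem, link); err != nil {
		fmt.Println(err)
	}
}
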
	I0127 13:23:55.328528  528954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:23:55.333031  528954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:23:55.339165  528954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:23:55.345030  528954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:23:55.351000  528954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:23:55.357110  528954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:23:55.362931  528954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
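
The -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours; since every command exits quietly, all of them are still valid and no regeneration is needed. The same check written against crypto/x509, as a sketch (the path is one of the files probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// which is the question "openssl x509 -checkend 86400" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
}
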
	I0127 13:23:55.368994  528954 kubeadm.go:392] StartCluster: {Name:no-preload-325431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-325431 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:23:55.369085  528954 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 13:23:55.369182  528954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:23:55.408100  528954 cri.go:89] found id: "c68fbcf444499cb39d7294187cae28b551fc12a41a7d7575e7d1421329e25bbf"
	I0127 13:23:55.408125  528954 cri.go:89] found id: "0a4c5593d50184f31756d9cbe22f35cc64a4493696b220236fc8cd336bea80c9"
	I0127 13:23:55.408128  528954 cri.go:89] found id: "d5f44fa632f1ba0498db2368e5d356b6d42d159b21888ca6f03c332776dd90a3"
	I0127 13:23:55.408131  528954 cri.go:89] found id: "dc49ce7264abdab1ac0f54d3d0bca6e69e0ade71ae40edcef409d23840d99e10"
	I0127 13:23:55.408136  528954 cri.go:89] found id: "223a386bbf08aebb7e1c728f9978424cab03999be2b38aec285e563dee72ad18"
	I0127 13:23:55.408138  528954 cri.go:89] found id: "ec1138b13b4d8060d23ab37dfadff9c7a064e3ffd21cbff59c3b64d7a18e088d"
	I0127 13:23:55.408141  528954 cri.go:89] found id: "996af3e916c92d0d5e13ae41c60e5e4563818028ed964075bff55b000dfbfad2"
	I0127 13:23:55.408144  528954 cri.go:89] found id: ""
	I0127 13:23:55.408189  528954 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 13:23:55.423754  528954 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T13:23:55Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 13:23:55.423854  528954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:23:55.434276  528954 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:23:55.434299  528954 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:23:55.434350  528954 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:23:55.444034  528954 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:23:55.445020  528954 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-325431" does not appear in /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:23:55.445645  528954 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-466901/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-325431" cluster setting kubeconfig missing "no-preload-325431" context setting]
	I0127 13:23:55.446625  528954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:23:55.448630  528954 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:23:55.458286  528954 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.116
	I0127 13:23:55.458319  528954 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:23:55.458337  528954 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 13:23:55.458408  528954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:23:55.499889  528954 cri.go:89] found id: "c68fbcf444499cb39d7294187cae28b551fc12a41a7d7575e7d1421329e25bbf"
	I0127 13:23:55.499914  528954 cri.go:89] found id: "0a4c5593d50184f31756d9cbe22f35cc64a4493696b220236fc8cd336bea80c9"
	I0127 13:23:55.499920  528954 cri.go:89] found id: "d5f44fa632f1ba0498db2368e5d356b6d42d159b21888ca6f03c332776dd90a3"
	I0127 13:23:55.499926  528954 cri.go:89] found id: "dc49ce7264abdab1ac0f54d3d0bca6e69e0ade71ae40edcef409d23840d99e10"
	I0127 13:23:55.499930  528954 cri.go:89] found id: "223a386bbf08aebb7e1c728f9978424cab03999be2b38aec285e563dee72ad18"
	I0127 13:23:55.499941  528954 cri.go:89] found id: "ec1138b13b4d8060d23ab37dfadff9c7a064e3ffd21cbff59c3b64d7a18e088d"
	I0127 13:23:55.499945  528954 cri.go:89] found id: "996af3e916c92d0d5e13ae41c60e5e4563818028ed964075bff55b000dfbfad2"
	I0127 13:23:55.499948  528954 cri.go:89] found id: ""
	I0127 13:23:55.499956  528954 cri.go:252] Stopping containers: [c68fbcf444499cb39d7294187cae28b551fc12a41a7d7575e7d1421329e25bbf 0a4c5593d50184f31756d9cbe22f35cc64a4493696b220236fc8cd336bea80c9 d5f44fa632f1ba0498db2368e5d356b6d42d159b21888ca6f03c332776dd90a3 dc49ce7264abdab1ac0f54d3d0bca6e69e0ade71ae40edcef409d23840d99e10 223a386bbf08aebb7e1c728f9978424cab03999be2b38aec285e563dee72ad18 ec1138b13b4d8060d23ab37dfadff9c7a064e3ffd21cbff59c3b64d7a18e088d 996af3e916c92d0d5e13ae41c60e5e4563818028ed964075bff55b000dfbfad2]
	I0127 13:23:55.500016  528954 ssh_runner.go:195] Run: which crictl
	I0127 13:23:55.504252  528954 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 c68fbcf444499cb39d7294187cae28b551fc12a41a7d7575e7d1421329e25bbf 0a4c5593d50184f31756d9cbe22f35cc64a4493696b220236fc8cd336bea80c9 d5f44fa632f1ba0498db2368e5d356b6d42d159b21888ca6f03c332776dd90a3 dc49ce7264abdab1ac0f54d3d0bca6e69e0ade71ae40edcef409d23840d99e10 223a386bbf08aebb7e1c728f9978424cab03999be2b38aec285e563dee72ad18 ec1138b13b4d8060d23ab37dfadff9c7a064e3ffd21cbff59c3b64d7a18e088d 996af3e916c92d0d5e13ae41c60e5e4563818028ed964075bff55b000dfbfad2
	I0127 13:23:55.543959  528954 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:23:55.561290  528954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:23:55.571243  528954 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:23:55.571286  528954 kubeadm.go:157] found existing configuration files:
	
	I0127 13:23:55.571341  528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:23:55.580728  528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:23:55.580802  528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:23:55.590469  528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:23:55.599442  528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:23:55.599505  528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:23:55.608639  528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:23:55.617810  528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:23:55.617866  528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:23:55.627449  528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:23:55.636352  528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:23:55.636414  528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:23:55.646169  528954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:23:55.655678  528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:23:55.781022  528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:23:56.984649  528954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.20358425s)
	I0127 13:23:56.984691  528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:23:57.193584  528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:23:57.283053  528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:23:57.356286  528954 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:23:57.356415  528954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:23:57.856971  528954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:23:58.357257  528954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:23:58.857175  528954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:23:58.873326  528954 api_server.go:72] duration metric: took 1.517043726s to wait for apiserver process to appear ...
	I0127 13:23:58.873352  528954 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:23:58.873375  528954 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I0127 13:24:00.973587  528954 api_server.go:279] https://192.168.50.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:24:00.973620  528954 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:24:00.973641  528954 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I0127 13:24:01.002147  528954 api_server.go:279] https://192.168.50.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:24:01.002185  528954 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:24:01.373719  528954 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I0127 13:24:01.378715  528954 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:24:01.378743  528954 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:24:01.874416  528954 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I0127 13:24:01.880211  528954 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:24:01.880238  528954 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:24:02.373621  528954 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I0127 13:24:02.379055  528954 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
	ok
	I0127 13:24:02.387805  528954 api_server.go:141] control plane version: v1.32.1
	I0127 13:24:02.387834  528954 api_server.go:131] duration metric: took 3.514474808s to wait for apiserver health ...
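
The polling above treats early 403s (the anonymous probe before RBAC bootstrap finishes) and 500s (post-start hooks still pending) as "not ready yet" and stops as soon as /healthz returns 200. A simplified Go poller in the same spirit; it skips certificate verification for brevity, which the real check does not have to do, and the timeout is illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz URL until it answers 200 OK.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.116:8443/healthz", 2*time.Minute))
}
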
	I0127 13:24:02.387843  528954 cni.go:84] Creating CNI manager for ""
	I0127 13:24:02.387850  528954 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:24:02.389582  528954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:24:02.391147  528954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:24:02.406580  528954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:24:02.436722  528954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:24:02.481735  528954 system_pods.go:59] 8 kube-system pods found
	I0127 13:24:02.481792  528954 system_pods.go:61] "coredns-668d6bf9bc-bf8dx" [17e4173a-79c1-4a5b-be36-b1bd729f60ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:24:02.481817  528954 system_pods.go:61] "etcd-no-preload-325431" [d6e0d509-1ce1-403f-b611-ea6aafe35cb6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:24:02.481828  528954 system_pods.go:61] "kube-apiserver-no-preload-325431" [a389cfe9-f329-492d-bde1-060abc8566b1] Running
	I0127 13:24:02.481849  528954 system_pods.go:61] "kube-controller-manager-no-preload-325431" [cc0b544b-4e68-42e2-a648-8169e71b3dab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:24:02.481859  528954 system_pods.go:61] "kube-proxy-l848r" [f21a5889-e77f-4758-85b4-4a3690aa5ac5] Running
	I0127 13:24:02.481865  528954 system_pods.go:61] "kube-scheduler-no-preload-325431" [458c9ea7-9b2d-4f95-8327-95a1d758b6d4] Running
	I0127 13:24:02.481876  528954 system_pods.go:61] "metrics-server-f79f97bbb-8xzvp" [4697d44a-38ad-4036-b70d-9b1adb06b4fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:24:02.481889  528954 system_pods.go:61] "storage-provisioner" [2438a5ef-b375-4b61-8e3c-d06546af3cf3] Running
	I0127 13:24:02.481898  528954 system_pods.go:74] duration metric: took 45.15227ms to wait for pod list to return data ...
	I0127 13:24:02.481913  528954 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:24:02.487615  528954 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:24:02.487650  528954 node_conditions.go:123] node cpu capacity is 2
	I0127 13:24:02.487665  528954 node_conditions.go:105] duration metric: took 5.744059ms to run NodePressure ...
	I0127 13:24:02.487690  528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:24:02.953730  528954 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 13:24:02.962714  528954 kubeadm.go:739] kubelet initialised
	I0127 13:24:02.962743  528954 kubeadm.go:740] duration metric: took 8.973475ms waiting for restarted kubelet to initialise ...
	I0127 13:24:02.962754  528954 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:24:03.064260  528954 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-bf8dx" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:05.071758  528954 pod_ready.go:103] pod "coredns-668d6bf9bc-bf8dx" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:07.570901  528954 pod_ready.go:103] pod "coredns-668d6bf9bc-bf8dx" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:08.071180  528954 pod_ready.go:93] pod "coredns-668d6bf9bc-bf8dx" in "kube-system" namespace has status "Ready":"True"
	I0127 13:24:08.071211  528954 pod_ready.go:82] duration metric: took 5.006908748s for pod "coredns-668d6bf9bc-bf8dx" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:08.071222  528954 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:10.082734  528954 pod_ready.go:103] pod "etcd-no-preload-325431" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:12.578331  528954 pod_ready.go:103] pod "etcd-no-preload-325431" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:14.579534  528954 pod_ready.go:103] pod "etcd-no-preload-325431" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:15.577894  528954 pod_ready.go:93] pod "etcd-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
	I0127 13:24:15.577926  528954 pod_ready.go:82] duration metric: took 7.506694818s for pod "etcd-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:15.577940  528954 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:16.585574  528954 pod_ready.go:93] pod "kube-apiserver-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
	I0127 13:24:16.585599  528954 pod_ready.go:82] duration metric: took 1.007650863s for pod "kube-apiserver-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:16.585610  528954 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:16.591406  528954 pod_ready.go:93] pod "kube-controller-manager-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
	I0127 13:24:16.591436  528954 pod_ready.go:82] duration metric: took 5.818528ms for pod "kube-controller-manager-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:16.591452  528954 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l848r" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:16.598966  528954 pod_ready.go:93] pod "kube-proxy-l848r" in "kube-system" namespace has status "Ready":"True"
	I0127 13:24:16.598993  528954 pod_ready.go:82] duration metric: took 7.533761ms for pod "kube-proxy-l848r" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:16.599003  528954 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:17.605675  528954 pod_ready.go:93] pod "kube-scheduler-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
	I0127 13:24:17.605704  528954 pod_ready.go:82] duration metric: took 1.006693331s for pod "kube-scheduler-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:17.605715  528954 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:19.613122  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:21.613411  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:24.121198  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:26.613281  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:29.116476  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:31.614314  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:33.617135  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:36.113377  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:38.113918  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:40.614527  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:43.112343  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:45.113298  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:47.611855  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:49.612730  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:52.113290  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:54.113822  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:56.115084  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:58.614721  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:00.615475  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:03.114539  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:05.614785  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:08.112067  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:10.114136  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:12.614813  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:15.114911  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:17.613235  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:19.615069  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:22.112490  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:24.113931  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:26.612939  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:28.614150  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:31.114020  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:33.617512  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:35.621341  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:38.113791  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:40.612566  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:42.613649  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:45.112527  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:47.613287  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:50.112295  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:52.613335  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:54.613841  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:57.112972  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:59.113212  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:01.119015  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:03.613240  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:06.113510  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:08.612687  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:10.613660  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:13.112583  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:15.615178  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:18.112755  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:20.112926  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:22.113224  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:24.612860  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:26.613550  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:29.112197  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:31.613704  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:34.114073  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:36.613136  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:38.613720  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:41.113190  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:43.613305  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:45.614221  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:48.112358  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:50.114916  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:52.612994  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:54.613846  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:57.113493  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:59.613114  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:01.613502  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:03.614276  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:06.113397  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:08.613516  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:10.613838  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:13.112643  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:15.113094  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:17.611773  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:19.612923  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:21.613915  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:24.115614  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:26.613303  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:29.112954  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:31.613362  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:33.613747  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:35.614095  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:38.113248  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:40.113409  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:42.612479  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:44.612720  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:47.113541  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:49.113724  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:51.613977  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:53.614024  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:56.114005  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:58.115005  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:00.613284  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:02.613392  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:04.613875  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:06.618352  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:09.113660  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:11.613942  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:14.113721  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:16.613032  528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:17.606211  528954 pod_ready.go:82] duration metric: took 4m0.000478536s for pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace to be "Ready" ...
	E0127 13:28:17.606244  528954 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 13:28:17.606268  528954 pod_ready.go:39] duration metric: took 4m14.643501676s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
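The repeated pod_ready.go:103 lines above are minikube polling the metrics-server pod's Ready condition roughly every two seconds until the per-pod 4m0s budget expires, which is the wait that times out and fails this run. A minimal Go sketch of that kind of Ready-condition poll with client-go follows; the function name waitPodReady, the 2s interval, and the clientset wiring are illustrative assumptions, not minikube's actual pod_ready.go implementation.

    package podwait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod's Ready condition until it is True or the timeout
    // elapses. Illustrative sketch only; minikube's helper differs in detail.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil // the pod reports "Ready":"True"
    				}
    			}
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting %s for pod %q in %q to be Ready", timeout, name, ns)
    		}
    		time.Sleep(2 * time.Second) // roughly the cadence visible in the log above
    	}
    }

In this run the loop above never sees ConditionTrue for metrics-server-f79f97bbb-8xzvp, so the 4m0s timeout error that follows is expected given the log.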
	I0127 13:28:17.606320  528954 kubeadm.go:597] duration metric: took 4m22.172013871s to restartPrimaryControlPlane
	W0127 13:28:17.606408  528954 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:28:17.606449  528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 13:28:19.440328  528954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.83384135s)
	I0127 13:28:19.440434  528954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:28:19.457247  528954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:28:19.468454  528954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:28:19.479090  528954 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:28:19.479120  528954 kubeadm.go:157] found existing configuration files:
	
	I0127 13:28:19.479176  528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:28:19.489428  528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:28:19.489513  528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:28:19.500168  528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:28:19.513940  528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:28:19.514000  528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:28:19.526564  528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:28:19.536966  528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:28:19.537051  528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:28:19.547626  528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:28:19.557566  528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:28:19.557652  528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:28:19.568536  528954 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:28:19.733134  528954 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:28:29.507095  528954 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:28:29.507181  528954 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:28:29.507303  528954 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:28:29.507433  528954 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:28:29.507569  528954 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:28:29.507651  528954 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:28:29.555822  528954 out.go:235]   - Generating certificates and keys ...
	I0127 13:28:29.555980  528954 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:28:29.556057  528954 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:28:29.556164  528954 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:28:29.556257  528954 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:28:29.556362  528954 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:28:29.556450  528954 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:28:29.556534  528954 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:28:29.556621  528954 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:28:29.556725  528954 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:28:29.556836  528954 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:28:29.556899  528954 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:28:29.556989  528954 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:28:29.557062  528954 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:28:29.557154  528954 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:28:29.557231  528954 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:28:29.557321  528954 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:28:29.557467  528954 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:28:29.557589  528954 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:28:29.557650  528954 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:28:29.559497  528954 out.go:235]   - Booting up control plane ...
	I0127 13:28:29.559615  528954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:28:29.559733  528954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:28:29.559822  528954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:28:29.559954  528954 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:28:29.560102  528954 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:28:29.560178  528954 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:28:29.560313  528954 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:28:29.560450  528954 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:28:29.560525  528954 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.686794ms
	I0127 13:28:29.560617  528954 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:28:29.560673  528954 kubeadm.go:310] [api-check] The API server is healthy after 6.003038304s
	I0127 13:28:29.560795  528954 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:28:29.560965  528954 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:28:29.561040  528954 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:28:29.561242  528954 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-325431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:28:29.561318  528954 kubeadm.go:310] [bootstrap-token] Using token: ec8dk3.k4ocr1751q2as6lm
	I0127 13:28:29.563363  528954 out.go:235]   - Configuring RBAC rules ...
	I0127 13:28:29.563514  528954 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:28:29.563634  528954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:28:29.563884  528954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:28:29.564032  528954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:28:29.564184  528954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:28:29.564302  528954 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:28:29.564447  528954 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:28:29.564512  528954 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:28:29.564552  528954 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:28:29.564556  528954 kubeadm.go:310] 
	I0127 13:28:29.564605  528954 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:28:29.564608  528954 kubeadm.go:310] 
	I0127 13:28:29.564675  528954 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:28:29.564678  528954 kubeadm.go:310] 
	I0127 13:28:29.564700  528954 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:28:29.564747  528954 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:28:29.564792  528954 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:28:29.564795  528954 kubeadm.go:310] 
	I0127 13:28:29.564866  528954 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:28:29.564872  528954 kubeadm.go:310] 
	I0127 13:28:29.564922  528954 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:28:29.564926  528954 kubeadm.go:310] 
	I0127 13:28:29.564991  528954 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:28:29.565074  528954 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:28:29.565163  528954 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:28:29.565168  528954 kubeadm.go:310] 
	I0127 13:28:29.565262  528954 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:28:29.565346  528954 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:28:29.565350  528954 kubeadm.go:310] 
	I0127 13:28:29.565421  528954 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ec8dk3.k4ocr1751q2as6lm \
	I0127 13:28:29.565504  528954 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 \
	I0127 13:28:29.565528  528954 kubeadm.go:310] 	--control-plane 
	I0127 13:28:29.565534  528954 kubeadm.go:310] 
	I0127 13:28:29.565640  528954 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:28:29.565647  528954 kubeadm.go:310] 
	I0127 13:28:29.565721  528954 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ec8dk3.k4ocr1751q2as6lm \
	I0127 13:28:29.565880  528954 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 
	I0127 13:28:29.565896  528954 cni.go:84] Creating CNI manager for ""
	I0127 13:28:29.565905  528954 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:28:29.571921  528954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:28:29.573671  528954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:28:29.600549  528954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
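The two lines above show minikube selecting the bridge CNI for the kvm2 + containerd combination and copying a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact payload is not reproduced in the log; the snippet below embeds a typical bridge conflist as a Go string purely for illustration, and the subnet, plugin list and field values are assumptions rather than the file actually written here.

    package cniexample

    // illustrativeBridgeConflist is a typical CNI bridge configuration of the kind
    // minikube writes to /etc/cni/net.d/1-k8s.conflist. The names and the subnet
    // are assumptions for illustration, not the exact 496-byte payload from the log.
    const illustrativeBridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`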
	I0127 13:28:29.632214  528954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:28:29.632318  528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:28:29.632503  528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-325431 minikube.k8s.io/updated_at=2025_01_27T13_28_29_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=no-preload-325431 minikube.k8s.io/primary=true
	I0127 13:28:29.658309  528954 ops.go:34] apiserver oom_adj: -16
	I0127 13:28:30.154694  528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:28:30.655330  528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:28:31.154961  528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:28:31.654793  528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:28:32.155389  528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:28:32.655001  528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:28:32.770122  528954 kubeadm.go:1113] duration metric: took 3.137876229s to wait for elevateKubeSystemPrivileges
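The six "kubectl get sa default" runs between 13:28:30 and 13:28:32 are minikube retrying until the default ServiceAccount exists before it finishes elevating kube-system privileges, which the summary line reports as a 3.14s wait. A hedged sketch of that retry loop is below; the helper name, the 500ms backoff and the kubeconfig handling are assumptions, not minikube's exact code.

    package sawait

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultServiceAccount shells out to kubectl until "get sa default"
    // succeeds or the timeout elapses, mirroring the retries visible in the log.
    // The binary path, kubeconfig flag and 500ms interval are illustrative.
    func waitForDefaultServiceAccount(ctx context.Context, kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		cmd := exec.CommandContext(ctx, kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil // the default ServiceAccount exists
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %s waiting for the default service account", timeout)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }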
	I0127 13:28:32.770176  528954 kubeadm.go:394] duration metric: took 4m37.401187954s to StartCluster
	I0127 13:28:32.770204  528954 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:28:32.770307  528954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:28:32.771338  528954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:28:32.771619  528954 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:28:32.771757  528954 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:28:32.771867  528954 config.go:182] Loaded profile config "no-preload-325431": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:28:32.771877  528954 addons.go:69] Setting storage-provisioner=true in profile "no-preload-325431"
	I0127 13:28:32.771896  528954 addons.go:238] Setting addon storage-provisioner=true in "no-preload-325431"
	W0127 13:28:32.771912  528954 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:28:32.771924  528954 addons.go:69] Setting metrics-server=true in profile "no-preload-325431"
	I0127 13:28:32.771940  528954 addons.go:238] Setting addon metrics-server=true in "no-preload-325431"
	I0127 13:28:32.771948  528954 host.go:66] Checking if "no-preload-325431" exists ...
	I0127 13:28:32.771951  528954 addons.go:69] Setting dashboard=true in profile "no-preload-325431"
	I0127 13:28:32.771971  528954 addons.go:238] Setting addon dashboard=true in "no-preload-325431"
	W0127 13:28:32.771985  528954 addons.go:247] addon dashboard should already be in state true
	I0127 13:28:32.772026  528954 host.go:66] Checking if "no-preload-325431" exists ...
	I0127 13:28:32.772339  528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:32.772381  528954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:32.772444  528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:32.772491  528954 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0127 13:28:32.771954  528954 addons.go:247] addon metrics-server should already be in state true
	I0127 13:28:32.771931  528954 addons.go:69] Setting default-storageclass=true in profile "no-preload-325431"
	I0127 13:28:32.772561  528954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-325431"
	I0127 13:28:32.772704  528954 host.go:66] Checking if "no-preload-325431" exists ...
	I0127 13:28:32.773018  528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:32.773059  528954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:32.773063  528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:32.773106  528954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:32.773684  528954 out.go:177] * Verifying Kubernetes components...
	I0127 13:28:32.775484  528954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:28:32.791534  528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32981
	I0127 13:28:32.792145  528954 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:32.792826  528954 main.go:141] libmachine: Using API Version  1
	I0127 13:28:32.792857  528954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:32.792949  528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0127 13:28:32.792988  528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0127 13:28:32.793322  528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34337
	I0127 13:28:32.793488  528954 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:32.793579  528954 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:32.793653  528954 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:32.793708  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetState
	I0127 13:28:32.793967  528954 main.go:141] libmachine: Using API Version  1
	I0127 13:28:32.793989  528954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:32.794127  528954 main.go:141] libmachine: Using API Version  1
	I0127 13:28:32.794144  528954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:32.794498  528954 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:32.794531  528954 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:32.794779  528954 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:32.795535  528954 main.go:141] libmachine: Using API Version  1
	I0127 13:28:32.795556  528954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:32.795851  528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:32.795888  528954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:32.797017  528954 addons.go:238] Setting addon default-storageclass=true in "no-preload-325431"
	W0127 13:28:32.797035  528954 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:28:32.797068  528954 host.go:66] Checking if "no-preload-325431" exists ...
	I0127 13:28:32.797418  528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:32.797453  528954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:32.797741  528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:32.797777  528954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:32.797977  528954 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:32.798620  528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:32.798660  528954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:32.817426  528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35197
	I0127 13:28:32.817901  528954 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:32.818380  528954 main.go:141] libmachine: Using API Version  1
	I0127 13:28:32.818399  528954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:32.818715  528954 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:32.818907  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetState
	I0127 13:28:32.821099  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:28:32.821782  528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0127 13:28:32.822281  528954 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:32.822811  528954 main.go:141] libmachine: Using API Version  1
	I0127 13:28:32.822835  528954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:32.823252  528954 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:32.823879  528954 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:28:32.825375  528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42361
	I0127 13:28:32.825970  528954 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:32.826674  528954 main.go:141] libmachine: Using API Version  1
	I0127 13:28:32.826699  528954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:32.827070  528954 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:32.827808  528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:32.827868  528954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:32.828111  528954 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:28:32.828544  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetState
	I0127 13:28:32.829570  528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:28:32.829601  528954 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:28:32.829627  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:28:32.831338  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:28:32.834827  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:28:32.835365  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:28:32.835387  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:28:32.835758  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
	I0127 13:28:32.835988  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:28:32.836173  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
	I0127 13:28:32.836364  528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
	I0127 13:28:32.837086  528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I0127 13:28:32.837418  528954 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:28:32.837500  528954 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:32.838122  528954 main.go:141] libmachine: Using API Version  1
	I0127 13:28:32.838148  528954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:32.838640  528954 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:32.838813  528954 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:28:32.838830  528954 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:28:32.838853  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:28:32.838871  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetState
	I0127 13:28:32.841521  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:28:32.843361  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:28:32.843995  528954 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:28:32.844249  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:28:32.844286  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:28:32.844647  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
	I0127 13:28:32.844886  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:28:32.845200  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
	I0127 13:28:32.845386  528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
	I0127 13:28:32.845938  528954 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:28:32.845958  528954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:28:32.845976  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:28:32.848694  528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I0127 13:28:32.849174  528954 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:32.849648  528954 main.go:141] libmachine: Using API Version  1
	I0127 13:28:32.849668  528954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:32.849887  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:28:32.850116  528954 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:32.850322  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetState
	I0127 13:28:32.850423  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:28:32.850486  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:28:32.850698  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
	I0127 13:28:32.850901  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:28:32.851130  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
	I0127 13:28:32.851341  528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
	I0127 13:28:32.852026  528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
	I0127 13:28:32.852266  528954 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:28:32.852280  528954 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:28:32.852294  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
	I0127 13:28:32.855632  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:28:32.856244  528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
	I0127 13:28:32.856261  528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
	I0127 13:28:32.856511  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
	I0127 13:28:32.856742  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
	I0127 13:28:32.856887  528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
	I0127 13:28:32.857019  528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
	I0127 13:28:33.006015  528954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:28:33.027005  528954 node_ready.go:35] waiting up to 6m0s for node "no-preload-325431" to be "Ready" ...
	I0127 13:28:33.066405  528954 node_ready.go:49] node "no-preload-325431" has status "Ready":"True"
	I0127 13:28:33.066442  528954 node_ready.go:38] duration metric: took 39.39561ms for node "no-preload-325431" to be "Ready" ...
	I0127 13:28:33.066457  528954 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:28:33.104507  528954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:28:33.115586  528954 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:33.198966  528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:28:33.199005  528954 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:28:33.252334  528954 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:28:33.252374  528954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:28:33.252518  528954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:28:33.268119  528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:28:33.268153  528954 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:28:33.353884  528954 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:28:33.353918  528954 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:28:33.363468  528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:28:33.363509  528954 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:28:33.429294  528954 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:28:33.429332  528954 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:28:33.469451  528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:28:33.469488  528954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:28:33.516000  528954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:28:33.609014  528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:28:33.609050  528954 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:28:33.663870  528954 pod_ready.go:93] pod "etcd-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
	I0127 13:28:33.663902  528954 pod_ready.go:82] duration metric: took 548.28046ms for pod "etcd-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:33.663918  528954 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:33.743380  528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:28:33.743415  528954 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:28:33.906899  528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:28:33.906931  528954 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:28:33.989880  528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:28:33.989985  528954 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:28:34.084465  528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:28:34.084497  528954 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:28:34.157593  528954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:28:34.559022  528954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.454458733s)
	I0127 13:28:34.559092  528954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.306537266s)
	I0127 13:28:34.559153  528954 main.go:141] libmachine: Making call to close driver server
	I0127 13:28:34.559099  528954 main.go:141] libmachine: Making call to close driver server
	I0127 13:28:34.559215  528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
	I0127 13:28:34.559175  528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
	I0127 13:28:34.559617  528954 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:28:34.559636  528954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:28:34.559652  528954 main.go:141] libmachine: Making call to close driver server
	I0127 13:28:34.559661  528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
	I0127 13:28:34.559760  528954 main.go:141] libmachine: (no-preload-325431) DBG | Closing plugin on server side
	I0127 13:28:34.559812  528954 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:28:34.559830  528954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:28:34.559842  528954 main.go:141] libmachine: Making call to close driver server
	I0127 13:28:34.559875  528954 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:28:34.559893  528954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:28:34.559951  528954 main.go:141] libmachine: (no-preload-325431) DBG | Closing plugin on server side
	I0127 13:28:34.559880  528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
	I0127 13:28:34.560364  528954 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:28:34.560386  528954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:28:34.587657  528954 main.go:141] libmachine: Making call to close driver server
	I0127 13:28:34.587694  528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
	I0127 13:28:34.588235  528954 main.go:141] libmachine: (no-preload-325431) DBG | Closing plugin on server side
	I0127 13:28:34.588257  528954 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:28:34.588306  528954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:28:35.333995  528954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.817938304s)
	I0127 13:28:35.334057  528954 main.go:141] libmachine: Making call to close driver server
	I0127 13:28:35.334071  528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
	I0127 13:28:35.334464  528954 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:28:35.334497  528954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:28:35.334508  528954 main.go:141] libmachine: Making call to close driver server
	I0127 13:28:35.334516  528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
	I0127 13:28:35.334790  528954 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:28:35.334814  528954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:28:35.334827  528954 addons.go:479] Verifying addon metrics-server=true in "no-preload-325431"
	I0127 13:28:35.686543  528954 pod_ready.go:103] pod "kube-apiserver-no-preload-325431" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:36.551697  528954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.394050922s)
	I0127 13:28:36.551766  528954 main.go:141] libmachine: Making call to close driver server
	I0127 13:28:36.551778  528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
	I0127 13:28:36.552197  528954 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:28:36.552291  528954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:28:36.552336  528954 main.go:141] libmachine: Making call to close driver server
	I0127 13:28:36.552379  528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
	I0127 13:28:36.552264  528954 main.go:141] libmachine: (no-preload-325431) DBG | Closing plugin on server side
	I0127 13:28:36.554273  528954 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:28:36.554297  528954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:28:36.554277  528954 main.go:141] libmachine: (no-preload-325431) DBG | Closing plugin on server side
	I0127 13:28:36.556095  528954 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-325431 addons enable metrics-server
	
	I0127 13:28:36.557682  528954 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:28:36.559221  528954 addons.go:514] duration metric: took 3.787479018s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:28:38.171680  528954 pod_ready.go:103] pod "kube-apiserver-no-preload-325431" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:40.671375  528954 pod_ready.go:103] pod "kube-apiserver-no-preload-325431" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:41.171716  528954 pod_ready.go:93] pod "kube-apiserver-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
	I0127 13:28:41.171759  528954 pod_ready.go:82] duration metric: took 7.507831849s for pod "kube-apiserver-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:41.171776  528954 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:41.177006  528954 pod_ready.go:93] pod "kube-controller-manager-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
	I0127 13:28:41.177037  528954 pod_ready.go:82] duration metric: took 5.251769ms for pod "kube-controller-manager-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:41.177051  528954 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:41.185589  528954 pod_ready.go:93] pod "kube-scheduler-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
	I0127 13:28:41.185623  528954 pod_ready.go:82] duration metric: took 8.562889ms for pod "kube-scheduler-no-preload-325431" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:41.185635  528954 pod_ready.go:39] duration metric: took 8.119162889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:28:41.185667  528954 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:28:41.185750  528954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:28:41.211566  528954 api_server.go:72] duration metric: took 8.439896874s to wait for apiserver process to appear ...
	I0127 13:28:41.211674  528954 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:28:41.211718  528954 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I0127 13:28:41.218905  528954 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
	ok
	I0127 13:28:41.221906  528954 api_server.go:141] control plane version: v1.32.1
	I0127 13:28:41.221942  528954 api_server.go:131] duration metric: took 10.24564ms to wait for apiserver health ...
	I0127 13:28:41.221954  528954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:28:41.237725  528954 system_pods.go:59] 9 kube-system pods found
	I0127 13:28:41.237853  528954 system_pods.go:61] "coredns-668d6bf9bc-4qzkt" [07cf0c66-5805-4c95-81d5-88276ae8634f] Running
	I0127 13:28:41.237881  528954 system_pods.go:61] "coredns-668d6bf9bc-hpb7s" [73baecfb-5361-4d5f-b11d-a8b361f28fb8] Running
	I0127 13:28:41.237910  528954 system_pods.go:61] "etcd-no-preload-325431" [7b6f6b5c-6e2d-425b-9311-565ea323e42d] Running
	I0127 13:28:41.237933  528954 system_pods.go:61] "kube-apiserver-no-preload-325431" [edbe877d-de59-41e4-9bc4-0f11b4b191aa] Running
	I0127 13:28:41.237956  528954 system_pods.go:61] "kube-controller-manager-no-preload-325431" [01168381-3ea7-4439-8ba7-d31dbee82a05] Running
	I0127 13:28:41.237971  528954 system_pods.go:61] "kube-proxy-sxztd" [b2ce07c8-7354-4a9d-87a4-af9c46bf3ad3] Running
	I0127 13:28:41.237985  528954 system_pods.go:61] "kube-scheduler-no-preload-325431" [b20fc6de-09d5-4db0-a1b2-d20570df69b1] Running
	I0127 13:28:41.238019  528954 system_pods.go:61] "metrics-server-f79f97bbb-z7vjh" [f904e246-cad3-4c86-8a01-f8eea49bf563] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:28:41.238035  528954 system_pods.go:61] "storage-provisioner" [241c0e33-1145-46f6-abbe-f7e75ada3578] Running
	I0127 13:28:41.238058  528954 system_pods.go:74] duration metric: took 16.0946ms to wait for pod list to return data ...
	I0127 13:28:41.238100  528954 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:28:41.242966  528954 default_sa.go:45] found service account: "default"
	I0127 13:28:41.242995  528954 default_sa.go:55] duration metric: took 4.876772ms for default service account to be created ...
	I0127 13:28:41.243009  528954 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:28:41.250843  528954 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-325431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
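(For local triage, the post-stop start that was killed here can be retried outside the test harness by rerunning the recorded args as-is — a sketch, assuming the out/minikube-linux-amd64 binary from this build is available:

	out/minikube-linux-amd64 start -p no-preload-325431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
)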
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-325431 -n no-preload-325431
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-325431 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-325431 logs -n 25: (1.487259668s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-116657        | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC | 27 Jan 25 13:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC | 27 Jan 25 13:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-325431                  | no-preload-325431            | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC | 27 Jan 25 13:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-325431                                   | no-preload-325431            | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-766944                 | embed-certs-766944           | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-766944                                  | embed-certs-766944           | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-325510       | default-k8s-diff-port-325510 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-325510 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC |                     |
	|         | default-k8s-diff-port-325510                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-116657             | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-116657 image                           | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	| delete  | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	| start   | -p newest-cni-296225 --memory=2200 --alsologtostderr   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-296225             | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-296225                  | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-296225 --memory=2200 --alsologtostderr   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-296225 image list                           | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	| delete  | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:28:56
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:28:56.167206  531586 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:28:56.167420  531586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:28:56.167436  531586 out.go:358] Setting ErrFile to fd 2...
	I0127 13:28:56.167442  531586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:28:56.167737  531586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 13:28:56.168827  531586 out.go:352] Setting JSON to false
	I0127 13:28:56.169977  531586 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":36633,"bootTime":1737947903,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:28:56.170093  531586 start.go:139] virtualization: kvm guest
	I0127 13:28:56.172461  531586 out.go:177] * [newest-cni-296225] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:28:56.174020  531586 notify.go:220] Checking for updates...
	I0127 13:28:56.174033  531586 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:28:56.175512  531586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:28:56.176838  531586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:28:56.178184  531586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	I0127 13:28:56.179518  531586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:28:56.180891  531586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:28:56.182708  531586 config.go:182] Loaded profile config "newest-cni-296225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:28:56.183131  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:56.183194  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:56.200308  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
	I0127 13:28:56.201060  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:56.201765  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:28:56.201797  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:56.202181  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:56.202408  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:28:56.202728  531586 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:28:56.203250  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:56.203319  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:56.220011  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I0127 13:28:56.220435  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:56.220978  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:28:56.221006  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:56.221409  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:56.221606  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:28:56.258580  531586 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:28:56.260066  531586 start.go:297] selected driver: kvm2
	I0127 13:28:56.260097  531586 start.go:901] validating driver "kvm2" against &{Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:28:56.260225  531586 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:28:56.260938  531586 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:28:56.261024  531586 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-466901/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:28:56.277111  531586 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:28:56.277523  531586 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:28:56.277560  531586 cni.go:84] Creating CNI manager for ""
	I0127 13:28:56.277605  531586 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:28:56.277639  531586 start.go:340] cluster config:
	{Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:28:56.277740  531586 iso.go:125] acquiring lock: {Name:mkcc3db98c9d4661e75c49bd9b203d0232dff8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:28:56.280361  531586 out.go:177] * Starting "newest-cni-296225" primary control-plane node in "newest-cni-296225" cluster
	I0127 13:28:56.281606  531586 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:28:56.281678  531586 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 13:28:56.281692  531586 cache.go:56] Caching tarball of preloaded images
	I0127 13:28:56.281783  531586 preload.go:172] Found /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 13:28:56.281796  531586 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 13:28:56.281935  531586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/config.json ...
	I0127 13:28:56.282191  531586 start.go:360] acquireMachinesLock for newest-cni-296225: {Name:mke115b779db52cb0a5f0a05f83d5bad0a35c561 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:28:56.282273  531586 start.go:364] duration metric: took 45.538µs to acquireMachinesLock for "newest-cni-296225"
	I0127 13:28:56.282297  531586 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:28:56.282306  531586 fix.go:54] fixHost starting: 
	I0127 13:28:56.282589  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:56.282621  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:56.298876  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38851
	I0127 13:28:56.299391  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:56.299946  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:28:56.299975  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:56.300339  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:56.300605  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:28:56.300813  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:28:56.302631  531586 fix.go:112] recreateIfNeeded on newest-cni-296225: state=Stopped err=<nil>
	I0127 13:28:56.302659  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	W0127 13:28:56.302822  531586 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:28:56.304762  531586 out.go:177] * Restarting existing kvm2 VM for "newest-cni-296225" ...
	I0127 13:28:53.806392  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:55.806518  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:57.808012  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:55.406991  529251 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.84407049s)
	I0127 13:28:55.407062  529251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:28:55.426120  529251 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:28:55.438195  529251 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:28:55.457399  529251 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:28:55.457425  529251 kubeadm.go:157] found existing configuration files:
	
	I0127 13:28:55.457485  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:28:55.469544  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:28:55.469611  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:28:55.481065  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:28:55.492868  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:28:55.492928  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:28:55.505930  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:28:55.517268  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:28:55.517332  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:28:55.528681  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:28:55.539678  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:28:55.539755  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:28:55.550987  529251 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:28:55.719870  529251 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:28:56.306046  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Start
	I0127 13:28:56.306254  531586 main.go:141] libmachine: (newest-cni-296225) starting domain...
	I0127 13:28:56.306277  531586 main.go:141] libmachine: (newest-cni-296225) ensuring networks are active...
	I0127 13:28:56.307157  531586 main.go:141] libmachine: (newest-cni-296225) Ensuring network default is active
	I0127 13:28:56.307587  531586 main.go:141] libmachine: (newest-cni-296225) Ensuring network mk-newest-cni-296225 is active
	I0127 13:28:56.307960  531586 main.go:141] libmachine: (newest-cni-296225) getting domain XML...
	I0127 13:28:56.308646  531586 main.go:141] libmachine: (newest-cni-296225) creating domain...
	I0127 13:28:57.604425  531586 main.go:141] libmachine: (newest-cni-296225) waiting for IP...
	I0127 13:28:57.605479  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:57.606123  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:57.606254  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:57.606079  531622 retry.go:31] will retry after 235.333873ms: waiting for domain to come up
	I0127 13:28:57.843349  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:57.843843  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:57.843877  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:57.843796  531622 retry.go:31] will retry after 261.244379ms: waiting for domain to come up
	I0127 13:28:58.107236  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:58.107847  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:58.107885  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:58.107815  531622 retry.go:31] will retry after 367.467141ms: waiting for domain to come up
	I0127 13:28:58.477662  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:58.478416  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:58.478454  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:58.478385  531622 retry.go:31] will retry after 466.451127ms: waiting for domain to come up
	I0127 13:28:58.946239  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:58.946809  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:58.946854  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:58.946766  531622 retry.go:31] will retry after 559.614953ms: waiting for domain to come up
	I0127 13:28:59.507817  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:59.508251  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:59.508317  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:59.508231  531622 retry.go:31] will retry after 651.013274ms: waiting for domain to come up
	I0127 13:29:00.161338  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:00.161916  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:00.161944  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:00.161879  531622 retry.go:31] will retry after 780.526485ms: waiting for domain to come up
	I0127 13:29:00.944251  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:00.944845  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:00.944875  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:00.944817  531622 retry.go:31] will retry after 1.304098s: waiting for domain to come up
	I0127 13:28:59.808090  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:01.808480  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:04.273698  529251 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:29:04.273779  529251 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:29:04.273879  529251 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:29:04.274011  529251 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:29:04.274137  529251 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:29:04.274229  529251 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:29:04.275837  529251 out.go:235]   - Generating certificates and keys ...
	I0127 13:29:04.275953  529251 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:29:04.276042  529251 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:29:04.276162  529251 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:29:04.276253  529251 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:29:04.276359  529251 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:29:04.276440  529251 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:29:04.276535  529251 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:29:04.276675  529251 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:29:04.276764  529251 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:29:04.276906  529251 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:29:04.276967  529251 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:29:04.277065  529251 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:29:04.277113  529251 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:29:04.277186  529251 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:29:04.277274  529251 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:29:04.277381  529251 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:29:04.277460  529251 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:29:04.277559  529251 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:29:04.277647  529251 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:29:04.280280  529251 out.go:235]   - Booting up control plane ...
	I0127 13:29:04.280412  529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:29:04.280494  529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:29:04.280588  529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:29:04.280708  529251 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:29:04.280854  529251 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:29:04.280919  529251 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:29:04.281101  529251 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:29:04.281252  529251 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:29:04.281343  529251 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002900104s
	I0127 13:29:04.281472  529251 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:29:04.281557  529251 kubeadm.go:310] [api-check] The API server is healthy after 5.001737119s
	I0127 13:29:04.281687  529251 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:29:04.281880  529251 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:29:04.281947  529251 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:29:04.282181  529251 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-766944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:29:04.282286  529251 kubeadm.go:310] [bootstrap-token] Using token: cubj1b.pwpdo0hgbjp08kat
	I0127 13:29:04.283697  529251 out.go:235]   - Configuring RBAC rules ...
	I0127 13:29:04.283851  529251 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:29:04.283970  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:29:04.284120  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:29:04.284293  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:29:04.284399  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:29:04.284473  529251 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:29:04.284576  529251 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:29:04.284615  529251 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:29:04.284679  529251 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:29:04.284689  529251 kubeadm.go:310] 
	I0127 13:29:04.284780  529251 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:29:04.284794  529251 kubeadm.go:310] 
	I0127 13:29:04.284891  529251 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:29:04.284900  529251 kubeadm.go:310] 
	I0127 13:29:04.284950  529251 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:29:04.285047  529251 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:29:04.285134  529251 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:29:04.285146  529251 kubeadm.go:310] 
	I0127 13:29:04.285267  529251 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:29:04.285279  529251 kubeadm.go:310] 
	I0127 13:29:04.285341  529251 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:29:04.285356  529251 kubeadm.go:310] 
	I0127 13:29:04.285410  529251 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:29:04.285478  529251 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:29:04.285536  529251 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:29:04.285542  529251 kubeadm.go:310] 
	I0127 13:29:04.285636  529251 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:29:04.285723  529251 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:29:04.285731  529251 kubeadm.go:310] 
	I0127 13:29:04.285803  529251 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cubj1b.pwpdo0hgbjp08kat \
	I0127 13:29:04.285958  529251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 \
	I0127 13:29:04.285997  529251 kubeadm.go:310] 	--control-plane 
	I0127 13:29:04.286004  529251 kubeadm.go:310] 
	I0127 13:29:04.286115  529251 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:29:04.286121  529251 kubeadm.go:310] 
	I0127 13:29:04.286247  529251 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cubj1b.pwpdo0hgbjp08kat \
	I0127 13:29:04.286407  529251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 
	I0127 13:29:04.286424  529251 cni.go:84] Creating CNI manager for ""
	I0127 13:29:04.286436  529251 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:04.288049  529251 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:29:02.250183  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:02.250724  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:02.250759  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:02.250691  531622 retry.go:31] will retry after 1.464046224s: waiting for domain to come up
	I0127 13:29:03.716441  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:03.716968  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:03.716995  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:03.716911  531622 retry.go:31] will retry after 1.473613486s: waiting for domain to come up
	I0127 13:29:05.192629  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:05.193220  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:05.193256  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:05.193184  531622 retry.go:31] will retry after 1.906374841s: waiting for domain to come up
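The repeated "will retry after …: waiting for domain to come up" lines above are libmachine polling libvirt until the freshly created VM obtains a DHCP lease. As a rough illustration only (the helper names, timings, and backoff growth here are invented for this sketch and are not minikube's retry.go), such a poll loop looks like:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupDomainIP stands in for the libvirt lookup the log performs; it is a
	// placeholder used only for this illustration and always fails here.
	func lookupDomainIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address of domain " + domain)
	}

	// waitForDomainIP polls until the domain reports an IP or the deadline passes,
	// sleeping a growing delay between attempts, like the retry lines in the log.
	func waitForDomainIP(domain string, deadline time.Duration) (string, error) {
		start := time.Now()
		delay := time.Second
		for time.Since(start) < deadline {
			if ip, err := lookupDomainIP(domain); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
			time.Sleep(delay)
			delay += delay / 2 // back off between attempts
		}
		return "", fmt.Errorf("domain %s did not come up within %v", domain, deadline)
	}

	func main() {
		_, err := waitForDomainIP("newest-cni-296225", 5*time.Second)
		fmt.Println(err)
	}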
	I0127 13:29:04.289218  529251 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:29:04.306228  529251 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:29:04.327835  529251 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:29:04.328008  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:04.328068  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-766944 minikube.k8s.io/updated_at=2025_01_27T13_29_04_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=embed-certs-766944 minikube.k8s.io/primary=true
	I0127 13:29:04.340778  529251 ops.go:34] apiserver oom_adj: -16
	I0127 13:29:04.617241  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:05.117682  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:05.618141  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:06.117679  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:06.618036  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:07.118302  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:07.618303  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:08.117464  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:08.221604  529251 kubeadm.go:1113] duration metric: took 3.893670046s to wait for elevateKubeSystemPrivileges
	I0127 13:29:08.221659  529251 kubeadm.go:394] duration metric: took 4m36.506709461s to StartCluster
	I0127 13:29:08.221687  529251 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:08.221784  529251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:08.223152  529251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:08.223468  529251 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:29:08.223561  529251 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:29:08.223686  529251 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-766944"
	I0127 13:29:08.223707  529251 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-766944"
	W0127 13:29:08.223715  529251 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:29:08.223720  529251 addons.go:69] Setting default-storageclass=true in profile "embed-certs-766944"
	I0127 13:29:08.223775  529251 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-766944"
	I0127 13:29:08.223766  529251 addons.go:69] Setting dashboard=true in profile "embed-certs-766944"
	I0127 13:29:08.223766  529251 addons.go:69] Setting metrics-server=true in profile "embed-certs-766944"
	I0127 13:29:08.223788  529251 config.go:182] Loaded profile config "embed-certs-766944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:08.223797  529251 addons.go:238] Setting addon dashboard=true in "embed-certs-766944"
	I0127 13:29:08.223800  529251 addons.go:238] Setting addon metrics-server=true in "embed-certs-766944"
	W0127 13:29:08.223808  529251 addons.go:247] addon metrics-server should already be in state true
	W0127 13:29:08.223808  529251 addons.go:247] addon dashboard should already be in state true
	I0127 13:29:08.223748  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.223840  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.223862  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.224260  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224276  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224288  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224294  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.224311  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.224322  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.224276  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224390  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.225260  529251 out.go:177] * Verifying Kubernetes components...
	I0127 13:29:08.226552  529251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:08.244300  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
	I0127 13:29:08.244514  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
	I0127 13:29:08.244516  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0127 13:29:08.245012  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.245254  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.245333  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.245603  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.245621  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.245769  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.245780  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.245787  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.245804  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.246187  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.246236  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.246240  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.246450  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.246858  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.246858  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.246898  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.246908  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.246957  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
	I0127 13:29:08.247392  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.248029  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.248055  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.248479  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.249163  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.249212  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.251401  529251 addons.go:238] Setting addon default-storageclass=true in "embed-certs-766944"
	W0127 13:29:08.251426  529251 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:29:08.251459  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.251834  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.251888  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.268388  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I0127 13:29:08.268957  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.269472  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.269488  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.269556  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0127 13:29:08.269902  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.270014  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.270112  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.270466  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.270483  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.270877  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.271178  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.272419  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.273919  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.274603  529251 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:29:08.275601  529251 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:29:08.276632  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:29:08.276650  529251 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:29:08.276675  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.277578  529251 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:08.277591  529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:29:08.277605  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.278681  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36115
	I0127 13:29:08.279322  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.280065  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.280083  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.280587  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.280859  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.282532  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.282997  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.283505  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.283533  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.283908  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.284083  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.284241  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.284285  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.284416  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.284808  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.284841  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.284853  529251 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:29:03.808549  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:05.809379  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:08.287154  529251 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:29:08.287385  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.287589  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.287760  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.287917  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.288316  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:29:08.288338  529251 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:29:08.288353  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.292370  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.292819  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.292844  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.293148  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.293268  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41237
	I0127 13:29:08.293441  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.293632  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.293671  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.293763  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.294180  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.294204  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.294614  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.295134  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.295170  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.312630  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
	I0127 13:29:08.313201  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.314043  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.314071  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.315352  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.315586  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.317764  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.318043  529251 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:08.318064  529251 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:29:08.318087  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.321585  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.322028  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.322057  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.322200  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.322476  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.322607  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.322797  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.543349  529251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:08.566526  529251 node_ready.go:35] waiting up to 6m0s for node "embed-certs-766944" to be "Ready" ...
	I0127 13:29:08.581029  529251 node_ready.go:49] node "embed-certs-766944" has status "Ready":"True"
	I0127 13:29:08.581058  529251 node_ready.go:38] duration metric: took 14.437055ms for node "embed-certs-766944" to be "Ready" ...
	I0127 13:29:08.581072  529251 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:08.591111  529251 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:08.663492  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:29:08.663529  529251 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:29:08.708763  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:08.731924  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:08.733763  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:29:08.733792  529251 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:29:08.816600  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:29:08.816646  529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:29:08.862311  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:29:08.862346  529251 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:29:08.881791  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:29:08.881830  529251 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:29:08.965427  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:29:08.965468  529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:29:09.025682  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:29:09.025718  529251 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:29:09.026871  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:09.026896  529251 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:29:09.106376  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:29:09.106408  529251 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:29:09.173153  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
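Every "ssh_runner.go:195] Run: …" entry above is a command executed inside the guest over SSH, authenticated with the per-machine key shown in the "sshutil.go:53] new ssh client" lines. A minimal sketch of that pattern using golang.org/x/crypto/ssh (the address and key path are copied from the log for illustration; error handling is simplified and this is not minikube's ssh_runner implementation):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote executes one command on the target host, authenticating with the
	// given private key file, and returns the combined stdout/stderr output.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		// Example mirroring the log: start the kubelet inside the guest.
		out, err := runRemote("192.168.39.24:22", "docker",
			"/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa",
			"sudo systemctl start kubelet")
		fmt.Println(out, err)
	}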
	I0127 13:29:07.101069  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:07.101691  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:07.101724  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:07.101645  531622 retry.go:31] will retry after 3.3503886s: waiting for domain to come up
	I0127 13:29:10.454092  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:10.454611  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:10.454643  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:10.454550  531622 retry.go:31] will retry after 2.977667559s: waiting for domain to come up
	I0127 13:29:09.316157  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:29:09.316202  529251 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:29:09.518415  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:29:09.518455  529251 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:29:09.836886  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:09.836931  529251 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:29:09.974913  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:10.529287  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.820478856s)
	I0127 13:29:10.529346  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.797380034s)
	I0127 13:29:10.529398  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.529415  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.529355  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.529488  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.529871  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.529910  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:10.529932  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.529943  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.529951  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.529878  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.530045  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.530070  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.530088  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.530265  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:10.530268  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.530299  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.530463  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.530482  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.599533  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.599626  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.599978  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:10.600095  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.600128  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.613397  529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:11.025503  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.852294623s)
	I0127 13:29:11.025583  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.025598  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.025974  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:11.026056  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.026072  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.026081  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.026094  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.026369  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:11.026430  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.026446  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.026465  529251 addons.go:479] Verifying addon metrics-server=true in "embed-certs-766944"
	I0127 13:29:11.846156  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.871176785s)
	I0127 13:29:11.846235  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.846258  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.846647  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.846693  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:11.846706  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.846720  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.846730  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.847020  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.847069  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.849004  529251 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-766944 addons enable metrics-server
	
	I0127 13:29:11.850858  529251 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:29:08.309241  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:10.806393  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:12.808038  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
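The recurring pod_ready.go lines above poll the pod's Ready condition until it flips to True or the wait times out. A condensed sketch of that check with client-go (the namespace and pod name come from the log; the kubeconfig path is illustrative and this is not minikube's pod_ready.go):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the named pod has its Ready condition set to True.
	func podIsReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
		pod, err := clientset.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ready, err := podIsReady(clientset, "kube-system", "metrics-server-f79f97bbb-l56jp")
		fmt.Println(ready, err)
	}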
	I0127 13:29:11.852345  529251 addons.go:514] duration metric: took 3.628795827s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:29:13.097655  529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:13.433798  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:13.434282  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:13.434324  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:13.434271  531622 retry.go:31] will retry after 5.418420331s: waiting for domain to come up
	I0127 13:29:14.300254  529417 pod_ready.go:82] duration metric: took 4m0.000130065s for pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace to be "Ready" ...
	E0127 13:29:14.300291  529417 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 13:29:14.300324  529417 pod_ready.go:39] duration metric: took 4m12.210910321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:14.300355  529417 kubeadm.go:597] duration metric: took 4m20.336267253s to restartPrimaryControlPlane
	W0127 13:29:14.300420  529417 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:29:14.300449  529417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 13:29:16.335301  529417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.034816955s)
	I0127 13:29:16.335395  529417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:29:16.352998  529417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:29:16.365092  529417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:29:16.378733  529417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:29:16.378758  529417 kubeadm.go:157] found existing configuration files:
	
	I0127 13:29:16.378804  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 13:29:16.395924  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:29:16.396005  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:29:16.408496  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 13:29:16.418917  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:29:16.418986  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:29:16.429065  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 13:29:16.439234  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:29:16.439333  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:29:16.449865  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 13:29:16.460738  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:29:16.460831  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:29:16.472411  529417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:29:16.642625  529417 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
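The cleanup sequence above (from 13:29:16.378 onward) greps each kubeconfig-style file under /etc/kubernetes for the expected API server URL and removes the file when the check fails, before re-running kubeadm init. A compact sketch of that step, run locally rather than over SSH and purely for illustration of the grep-then-remove pattern:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8444"
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, f := range files {
			path := "/etc/kubernetes/" + f
			// grep exits non-zero when the endpoint is absent (or the file is missing),
			// which is the signal in the log that the stale file should be removed.
			if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
				if err := exec.Command("sudo", "rm", "-f", path).Run(); err != nil {
					fmt.Fprintln(os.Stderr, "remove failed:", err)
				}
			}
		}
	}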
	I0127 13:29:15.100860  529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:16.102026  529251 pod_ready.go:93] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.102064  529251 pod_ready.go:82] duration metric: took 7.510920671s for pod "etcd-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.102080  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.108782  529251 pod_ready.go:93] pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.108818  529251 pod_ready.go:82] duration metric: took 6.727536ms for pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.108832  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.117964  529251 pod_ready.go:93] pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.117994  529251 pod_ready.go:82] duration metric: took 9.151947ms for pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.118008  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vp88s" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.125633  529251 pod_ready.go:93] pod "kube-proxy-vp88s" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.125657  529251 pod_ready.go:82] duration metric: took 7.641622ms for pod "kube-proxy-vp88s" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.125667  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.141368  529251 pod_ready.go:93] pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.141395  529251 pod_ready.go:82] duration metric: took 15.721182ms for pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.141403  529251 pod_ready.go:39] duration metric: took 7.560318089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:16.141421  529251 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:16.141484  529251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:16.168318  529251 api_server.go:72] duration metric: took 7.944806249s to wait for apiserver process to appear ...
	I0127 13:29:16.168353  529251 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:16.168382  529251 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0127 13:29:16.178242  529251 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0127 13:29:16.179663  529251 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:16.179696  529251 api_server.go:131] duration metric: took 11.33324ms to wait for apiserver health ...
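The healthz step above issues an HTTPS GET against the apiserver's /healthz endpoint and treats a 200 response with body "ok" as healthy. A bare-bones version of that probe (the URL is taken from the log; certificate verification is skipped here only to keep the sketch short, whereas a proper client would trust the cluster's CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// For illustration only; verify against the cluster CA in real use.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.24:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
	}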
	I0127 13:29:16.179706  529251 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:16.299895  529251 system_pods.go:59] 9 kube-system pods found
	I0127 13:29:16.299927  529251 system_pods.go:61] "coredns-668d6bf9bc-9h4k2" [0eb84d56-e399-4808-afda-b0e1ec4f201f] Running
	I0127 13:29:16.299933  529251 system_pods.go:61] "coredns-668d6bf9bc-wf444" [7afc402e-ab81-4eb5-b2cf-08be738f171d] Running
	I0127 13:29:16.299937  529251 system_pods.go:61] "etcd-embed-certs-766944" [22be64ef-9ba9-4750-aca9-f34b01b46f16] Running
	I0127 13:29:16.299941  529251 system_pods.go:61] "kube-apiserver-embed-certs-766944" [397082cc-acad-493c-8ddd-9f49def9100a] Running
	I0127 13:29:16.299945  529251 system_pods.go:61] "kube-controller-manager-embed-certs-766944" [fe84cf8b-7074-485b-a16e-d75b52b9fe15] Running
	I0127 13:29:16.299948  529251 system_pods.go:61] "kube-proxy-vp88s" [18e5bf87-73fb-43c4-a73e-b2f21a1bb7b8] Running
	I0127 13:29:16.299951  529251 system_pods.go:61] "kube-scheduler-embed-certs-766944" [96587dc6-6fbd-4d22-acfa-09a89f1e711a] Running
	I0127 13:29:16.299956  529251 system_pods.go:61] "metrics-server-f79f97bbb-27dz9" [9f604bd3-a953-4a12-b1bc-48e4e4c8bb4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:16.299962  529251 system_pods.go:61] "storage-provisioner" [7d91f3a9-4b10-40fa-84bc-9d881d955319] Running
	I0127 13:29:16.299973  529251 system_pods.go:74] duration metric: took 120.259661ms to wait for pod list to return data ...
	I0127 13:29:16.299984  529251 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:29:16.496603  529251 default_sa.go:45] found service account: "default"
	I0127 13:29:16.496645  529251 default_sa.go:55] duration metric: took 196.6512ms for default service account to be created ...
	I0127 13:29:16.496658  529251 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:29:16.702376  529251 system_pods.go:87] 9 kube-system pods found
	I0127 13:29:18.854257  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:18.854914  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has current primary IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:18.854944  531586 main.go:141] libmachine: (newest-cni-296225) found domain IP: 192.168.72.46
	I0127 13:29:18.854956  531586 main.go:141] libmachine: (newest-cni-296225) reserving static IP address...
	I0127 13:29:18.855436  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "newest-cni-296225", mac: "52:54:00:25:60:c9", ip: "192.168.72.46"} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:18.855466  531586 main.go:141] libmachine: (newest-cni-296225) DBG | skip adding static IP to network mk-newest-cni-296225 - found existing host DHCP lease matching {name: "newest-cni-296225", mac: "52:54:00:25:60:c9", ip: "192.168.72.46"}
	I0127 13:29:18.855480  531586 main.go:141] libmachine: (newest-cni-296225) reserved static IP address 192.168.72.46 for domain newest-cni-296225
	I0127 13:29:18.855493  531586 main.go:141] libmachine: (newest-cni-296225) waiting for SSH...
	I0127 13:29:18.855509  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Getting to WaitForSSH function...
	I0127 13:29:18.858091  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:18.858477  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:18.858507  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:18.858705  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Using SSH client type: external
	I0127 13:29:18.858725  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa (-rw-------)
	I0127 13:29:18.858760  531586 main.go:141] libmachine: (newest-cni-296225) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.46 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:29:18.858784  531586 main.go:141] libmachine: (newest-cni-296225) DBG | About to run SSH command:
	I0127 13:29:18.858806  531586 main.go:141] libmachine: (newest-cni-296225) DBG | exit 0
	I0127 13:29:18.996896  531586 main.go:141] libmachine: (newest-cni-296225) DBG | SSH cmd err, output: <nil>: 
	I0127 13:29:18.997263  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetConfigRaw
	I0127 13:29:18.998035  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
	I0127 13:29:19.001537  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.001980  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.002005  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.002524  531586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/config.json ...
	I0127 13:29:19.002778  531586 machine.go:93] provisionDockerMachine start ...
	I0127 13:29:19.002804  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:19.003111  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.006300  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.006759  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.006788  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.006991  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.007221  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.007434  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.007600  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.007802  531586 main.go:141] libmachine: Using SSH client type: native
	I0127 13:29:19.008050  531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.46 22 <nil> <nil>}
	I0127 13:29:19.008068  531586 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:29:19.124549  531586 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:29:19.124589  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetMachineName
	I0127 13:29:19.124921  531586 buildroot.go:166] provisioning hostname "newest-cni-296225"
	I0127 13:29:19.124953  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetMachineName
	I0127 13:29:19.125168  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.128509  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.128870  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.128904  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.129136  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.129338  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.129489  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.129682  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.129915  531586 main.go:141] libmachine: Using SSH client type: native
	I0127 13:29:19.130181  531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.46 22 <nil> <nil>}
	I0127 13:29:19.130202  531586 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-296225 && echo "newest-cni-296225" | sudo tee /etc/hostname
	I0127 13:29:19.274181  531586 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-296225
	
	I0127 13:29:19.274233  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.277975  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.278540  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.278575  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.278963  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.279243  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.279514  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.279686  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.279898  531586 main.go:141] libmachine: Using SSH client type: native
	I0127 13:29:19.280149  531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.46 22 <nil> <nil>}
	I0127 13:29:19.280176  531586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-296225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-296225/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-296225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:29:19.425977  531586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:29:19.426016  531586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-466901/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-466901/.minikube}
	I0127 13:29:19.426066  531586 buildroot.go:174] setting up certificates
	I0127 13:29:19.426080  531586 provision.go:84] configureAuth start
	I0127 13:29:19.426092  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetMachineName
	I0127 13:29:19.426372  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
	I0127 13:29:19.429756  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.430201  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.430230  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.430467  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.432982  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.433352  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.433381  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.433508  531586 provision.go:143] copyHostCerts
	I0127 13:29:19.433596  531586 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem, removing ...
	I0127 13:29:19.433613  531586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem
	I0127 13:29:19.433713  531586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem (1082 bytes)
	I0127 13:29:19.433862  531586 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem, removing ...
	I0127 13:29:19.433898  531586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem
	I0127 13:29:19.433952  531586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem (1123 bytes)
	I0127 13:29:19.434069  531586 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem, removing ...
	I0127 13:29:19.434083  531586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem
	I0127 13:29:19.434121  531586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem (1675 bytes)
	I0127 13:29:19.434225  531586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem org=jenkins.newest-cni-296225 san=[127.0.0.1 192.168.72.46 localhost minikube newest-cni-296225]
	I0127 13:29:19.616134  531586 provision.go:177] copyRemoteCerts
	I0127 13:29:19.616230  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:29:19.616268  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.619632  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.620115  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.620170  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.620627  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.620882  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.621062  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.621267  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:19.716453  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 13:29:19.751558  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 13:29:19.787164  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:29:19.822729  531586 provision.go:87] duration metric: took 396.632166ms to configureAuth
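configureAuth issues a server certificate for the guest signed by the local minikube CA and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A hedged openssl equivalent is sketched below; the file names and the SAN list are taken from the log lines above, while the key size, validity period and subject are illustrative assumptions rather than minikube's actual values:

    # Sketch: create a server key/CSR, then sign it with the existing CA, adding the SANs from the log.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.newest-cni-296225"
    openssl x509 -req -in server.csr -days 365 \
      -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.72.46,DNS:localhost,DNS:minikube,DNS:newest-cni-296225") \
      -out server.pem
    # copyRemoteCerts then streams ca.pem, server.pem and server-key.pem to /etc/docker over SSH with sudo.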
	I0127 13:29:19.822766  531586 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:29:19.823021  531586 config.go:182] Loaded profile config "newest-cni-296225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:19.823035  531586 machine.go:96] duration metric: took 820.241874ms to provisionDockerMachine
	I0127 13:29:19.823044  531586 start.go:293] postStartSetup for "newest-cni-296225" (driver="kvm2")
	I0127 13:29:19.823074  531586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:29:19.823125  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:19.823524  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:29:19.823610  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.826416  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.826837  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.826869  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.827189  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.827424  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.827641  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.827800  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:19.922618  531586 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:29:19.927700  531586 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:29:19.927740  531586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/addons for local assets ...
	I0127 13:29:19.927820  531586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/files for local assets ...
	I0127 13:29:19.927920  531586 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem -> 4742752.pem in /etc/ssl/certs
	I0127 13:29:19.928047  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:29:19.940393  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /etc/ssl/certs/4742752.pem (1708 bytes)
	I0127 13:29:19.970138  531586 start.go:296] duration metric: took 147.059526ms for postStartSetup
	I0127 13:29:19.970186  531586 fix.go:56] duration metric: took 23.687879815s for fixHost
	I0127 13:29:19.970213  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.973696  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.974136  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.974162  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.974433  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.974671  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.974863  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.975000  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.975177  531586 main.go:141] libmachine: Using SSH client type: native
	I0127 13:29:19.975406  531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.46 22 <nil> <nil>}
	I0127 13:29:19.975421  531586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:29:20.097158  531586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984560.051374432
	
	I0127 13:29:20.097195  531586 fix.go:216] guest clock: 1737984560.051374432
	I0127 13:29:20.097205  531586 fix.go:229] Guest: 2025-01-27 13:29:20.051374432 +0000 UTC Remote: 2025-01-27 13:29:19.970191951 +0000 UTC m=+23.842107580 (delta=81.182481ms)
	I0127 13:29:20.097251  531586 fix.go:200] guest clock delta is within tolerance: 81.182481ms
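The clock check above reads the guest time over SSH with `date +%s.%N`, compares it to the host, and only resynchronises when the delta is outside tolerance (here it is 81ms, so nothing is done). A minimal sketch of that comparison; the SSH key path is the one from this run and the resync step is only indicated in a comment:

    # Compare host and guest clocks and report the absolute delta in seconds.
    guest=$(ssh -i machines/newest-cni-296225/id_rsa docker@192.168.72.46 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" \
      'BEGIN { d = g - h; if (d < 0) d = -d; printf "guest clock delta: %.6fs\n", d }'
    # If the delta were outside tolerance, the guest clock would be reset with: sudo date -s @<host-epoch>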
	I0127 13:29:20.097264  531586 start.go:83] releasing machines lock for "newest-cni-296225", held for 23.814976228s
	I0127 13:29:20.097302  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:20.097604  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
	I0127 13:29:20.101191  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.101642  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:20.101693  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.101991  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:20.102587  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:20.102797  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:20.102930  531586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:29:20.102980  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:20.103025  531586 ssh_runner.go:195] Run: cat /version.json
	I0127 13:29:20.103054  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:20.106331  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.106785  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:20.106843  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.106883  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.107100  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:20.107355  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:20.107415  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:20.107456  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.107545  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:20.107711  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:20.107752  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:20.107851  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:20.108004  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:20.108175  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:20.198167  531586 ssh_runner.go:195] Run: systemctl --version
	I0127 13:29:20.220547  531586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:29:20.228913  531586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:29:20.229009  531586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:29:20.252220  531586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
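Because the run pins networking to minikube's own bridge CNI, any pre-existing bridge/podman CNI configs are renamed out of the way (not deleted), which is what disabled 87-podman-bridge.conflist above. A readable version of the same find command:

    # Rename bridge/podman CNI configs so containerd stops loading them.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;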
	I0127 13:29:20.252252  531586 start.go:495] detecting cgroup driver to use...
	I0127 13:29:20.252336  531586 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 13:29:20.290040  531586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 13:29:20.307723  531586 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:29:20.307812  531586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:29:20.323473  531586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:29:20.339833  531586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:29:20.476188  531586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:29:20.632180  531586 docker.go:233] disabling docker service ...
	I0127 13:29:20.632272  531586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:29:20.647480  531586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:29:20.662456  531586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:29:20.849643  531586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:29:21.014719  531586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
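With containerd as the selected runtime, both cri-docker and docker are stopped, disabled and masked so they cannot claim the CRI socket; the final is-active probe confirms docker stayed down. The equivalent systemctl sequence, condensed from the commands above:

    # Stop and mask cri-docker so it cannot own the CRI socket.
    sudo systemctl stop cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    # Do the same for docker itself, then confirm it is inactive.
    sudo systemctl stop docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    systemctl is-active --quiet docker || echo "docker is inactive"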
	I0127 13:29:21.034260  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:29:21.055949  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 13:29:21.068764  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 13:29:21.083524  531586 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 13:29:21.083605  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 13:29:21.098914  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:29:21.113664  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 13:29:21.127826  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:29:21.139382  531586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:29:21.151342  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 13:29:21.162384  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 13:29:21.174714  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
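The sed run above rewrites /etc/containerd/config.toml in place: pause image, OOM score handling, SystemdCgroup = false (the cluster uses the cgroupfs driver), the runc v2 shim, the CNI conf_dir and unprivileged ports. A condensed sketch of the main edits, taken from the logged commands:

    # Point crictl at the containerd socket.
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
    # Keep the sandbox image and cgroup driver consistent with the kubelet configuration.
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|' /etc/containerd/config.toml
    # Use the modern runc shim and the default CNI conf directory.
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml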
	I0127 13:29:21.188361  531586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:29:21.201837  531586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:29:21.201921  531586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:29:21.216404  531586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
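kube-proxy and the bridge CNI need bridged traffic to traverse iptables and IPv4 forwarding enabled; the sysctl probe fails with status 255 until br_netfilter is loaded, which is why the modprobe follows it. Sketch of the same preparation:

    # Load br_netfilter so the bridge-nf-call sysctls exist, then enable IPv4 forwarding.
    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # resolvable once the module is loaded
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'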
	I0127 13:29:21.226169  531586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:21.347858  531586 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 13:29:21.387449  531586 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 13:29:21.387582  531586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 13:29:21.393515  531586 retry.go:31] will retry after 514.05687ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 13:29:21.908225  531586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 13:29:21.917708  531586 start.go:563] Will wait 60s for crictl version
	I0127 13:29:21.917786  531586 ssh_runner.go:195] Run: which crictl
	I0127 13:29:21.923989  531586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:29:21.981569  531586 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 13:29:21.981675  531586 ssh_runner.go:195] Run: containerd --version
	I0127 13:29:22.027649  531586 ssh_runner.go:195] Run: containerd --version
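After restarting containerd the socket takes a moment to appear, hence the single retry logged above before the stat succeeds and crictl reports containerd v1.7.23. A small poll loop does the same job; the 60-iteration budget mirrors the "Will wait 60s" lines and is otherwise illustrative:

    # Wait for the containerd socket, then confirm the CRI endpoint answers.
    for i in $(seq 1 60); do
      stat /run/containerd/containerd.sock >/dev/null 2>&1 && break
      sleep 1
    done
    sudo /usr/bin/crictl version
    containerd --version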
	I0127 13:29:22.060339  531586 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 13:29:22.061787  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
	I0127 13:29:22.065481  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:22.065908  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:22.065946  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:22.066183  531586 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 13:29:22.070907  531586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:29:22.089788  531586 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 13:29:25.581414  529417 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:29:25.581498  529417 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:29:25.581603  529417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:29:25.581744  529417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:29:25.581857  529417 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:29:25.581911  529417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:29:25.583668  529417 out.go:235]   - Generating certificates and keys ...
	I0127 13:29:25.583784  529417 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:29:25.583864  529417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:29:25.583999  529417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:29:25.584094  529417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:29:25.584212  529417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:29:25.584290  529417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:29:25.584368  529417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:29:25.584490  529417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:29:25.584607  529417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:29:25.584736  529417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:29:25.584797  529417 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:29:25.584859  529417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:29:25.584911  529417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:29:25.584981  529417 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:29:25.585070  529417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:29:25.585182  529417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:29:25.585291  529417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:29:25.585425  529417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:29:25.585505  529417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:29:25.587922  529417 out.go:235]   - Booting up control plane ...
	I0127 13:29:25.588008  529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:29:25.588109  529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:29:25.588212  529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:29:25.588306  529417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:29:25.588407  529417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:29:25.588476  529417 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:29:25.588653  529417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:29:25.588744  529417 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:29:25.588806  529417 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.424535ms
	I0127 13:29:25.588894  529417 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:29:25.588947  529417 kubeadm.go:310] [api-check] The API server is healthy after 6.003546574s
	I0127 13:29:25.589042  529417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:29:25.589188  529417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:29:25.589243  529417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:29:25.589423  529417 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-325510 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:29:25.589477  529417 kubeadm.go:310] [bootstrap-token] Using token: pmveah.4ebz9u5xjcadsa8l
	I0127 13:29:25.590661  529417 out.go:235]   - Configuring RBAC rules ...
	I0127 13:29:25.590772  529417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:29:25.590884  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:29:25.591076  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:29:25.591309  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:29:25.591477  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:29:25.591601  529417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:29:25.591734  529417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:29:25.591810  529417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:29:25.591869  529417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:29:25.591879  529417 kubeadm.go:310] 
	I0127 13:29:25.591954  529417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:29:25.591974  529417 kubeadm.go:310] 
	I0127 13:29:25.592097  529417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:29:25.592115  529417 kubeadm.go:310] 
	I0127 13:29:25.592151  529417 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:29:25.592237  529417 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:29:25.592327  529417 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:29:25.592337  529417 kubeadm.go:310] 
	I0127 13:29:25.592390  529417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:29:25.592397  529417 kubeadm.go:310] 
	I0127 13:29:25.592435  529417 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:29:25.592439  529417 kubeadm.go:310] 
	I0127 13:29:25.592512  529417 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:29:25.592614  529417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:29:25.592674  529417 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:29:25.592682  529417 kubeadm.go:310] 
	I0127 13:29:25.592801  529417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:29:25.592928  529417 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:29:25.592941  529417 kubeadm.go:310] 
	I0127 13:29:25.593032  529417 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token pmveah.4ebz9u5xjcadsa8l \
	I0127 13:29:25.593158  529417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 \
	I0127 13:29:25.593193  529417 kubeadm.go:310] 	--control-plane 
	I0127 13:29:25.593206  529417 kubeadm.go:310] 
	I0127 13:29:25.593328  529417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:29:25.593347  529417 kubeadm.go:310] 
	I0127 13:29:25.593453  529417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token pmveah.4ebz9u5xjcadsa8l \
	I0127 13:29:25.593643  529417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 
	I0127 13:29:25.593663  529417 cni.go:84] Creating CNI manager for ""
	I0127 13:29:25.593674  529417 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:25.595331  529417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:29:22.091203  531586 kubeadm.go:883] updating cluster {Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:29:22.091437  531586 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:29:22.091524  531586 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:29:22.133513  531586 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:29:22.133543  531586 containerd.go:534] Images already preloaded, skipping extraction
	I0127 13:29:22.133614  531586 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:29:22.172620  531586 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:29:22.172654  531586 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:29:22.172666  531586 kubeadm.go:934] updating node { 192.168.72.46 8443 v1.32.1 containerd true true} ...
	I0127 13:29:22.172814  531586 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-296225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:29:22.172904  531586 ssh_runner.go:195] Run: sudo crictl info
	I0127 13:29:22.221421  531586 cni.go:84] Creating CNI manager for ""
	I0127 13:29:22.221446  531586 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:22.221457  531586 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 13:29:22.221483  531586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.46 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-296225 NodeName:newest-cni-296225 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.46"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:29:22.221619  531586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-296225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.46"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.46"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:29:22.221696  531586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:29:22.233206  531586 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:29:22.233298  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:29:22.247498  531586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0127 13:29:22.265563  531586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:29:22.283377  531586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0127 13:29:22.304627  531586 ssh_runner.go:195] Run: grep 192.168.72.46	control-plane.minikube.internal$ /etc/hosts
	I0127 13:29:22.310093  531586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.46	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:29:22.328149  531586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:22.474894  531586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:22.498792  531586 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225 for IP: 192.168.72.46
	I0127 13:29:22.498819  531586 certs.go:194] generating shared ca certs ...
	I0127 13:29:22.498848  531586 certs.go:226] acquiring lock for ca certs: {Name:mk60f2aac78eb363c5e06a00675357d94c0df88d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:22.499080  531586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key
	I0127 13:29:22.499144  531586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key
	I0127 13:29:22.499160  531586 certs.go:256] generating profile certs ...
	I0127 13:29:22.499295  531586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/client.key
	I0127 13:29:22.499368  531586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/apiserver.key.1b824597
	I0127 13:29:22.499428  531586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/proxy-client.key
	I0127 13:29:22.499576  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem (1338 bytes)
	W0127 13:29:22.499617  531586 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275_empty.pem, impossibly tiny 0 bytes
	I0127 13:29:22.499632  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:29:22.499663  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem (1082 bytes)
	I0127 13:29:22.499700  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:29:22.499734  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem (1675 bytes)
	I0127 13:29:22.499790  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem (1708 bytes)
	I0127 13:29:22.500650  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:29:22.551481  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:29:22.590593  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:29:22.630918  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:29:22.660478  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 13:29:22.696686  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 13:29:22.724193  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:29:22.752949  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:29:22.784814  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:29:22.812321  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem --> /usr/share/ca-certificates/474275.pem (1338 bytes)
	I0127 13:29:22.842249  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /usr/share/ca-certificates/4742752.pem (1708 bytes)
	I0127 13:29:22.872391  531586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:29:22.898310  531586 ssh_runner.go:195] Run: openssl version
	I0127 13:29:22.905518  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:29:22.917623  531586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:29:22.922904  531586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:10 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:29:22.922982  531586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:29:22.929666  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:29:22.941982  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/474275.pem && ln -fs /usr/share/ca-certificates/474275.pem /etc/ssl/certs/474275.pem"
	I0127 13:29:22.955315  531586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/474275.pem
	I0127 13:29:22.962079  531586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:18 /usr/share/ca-certificates/474275.pem
	I0127 13:29:22.962157  531586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/474275.pem
	I0127 13:29:22.971599  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/474275.pem /etc/ssl/certs/51391683.0"
	I0127 13:29:22.985012  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4742752.pem && ln -fs /usr/share/ca-certificates/4742752.pem /etc/ssl/certs/4742752.pem"
	I0127 13:29:22.998788  531586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4742752.pem
	I0127 13:29:23.005232  531586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:18 /usr/share/ca-certificates/4742752.pem
	I0127 13:29:23.005312  531586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4742752.pem
	I0127 13:29:23.013471  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4742752.pem /etc/ssl/certs/3ec20f2e.0"
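The test/ln sequence above installs each CA into the system trust store: the PEM is placed under /usr/share/ca-certificates and a symlink named after its OpenSSL subject hash (b5213941.0 for minikubeCA, 3ec20f2e.0 for the user cert bundle) is created in /etc/ssl/certs, which is how OpenSSL locates trusted CAs. A simplified sketch for one certificate; the file name is taken from the log:

    # Install a CA so OpenSSL can find it by subject hash.
    sudo cp minikubeCA.pem /usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"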
	I0127 13:29:23.028126  531586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:29:23.033971  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:29:23.041089  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:29:23.048533  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:29:23.056641  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:29:23.065453  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:29:23.074452  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
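Each control-plane certificate is then checked for expiry within the next 24 hours (86400 seconds); a non-zero exit from -checkend is what would force regeneration before the restart. For example:

    # Exit 0 if the cert is still valid 24h from now, 1 if it will have expired by then.
    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "certificate still valid for at least 24h" \
      || echo "certificate expires within 24h - would be regenerated"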
	I0127 13:29:23.083360  531586 kubeadm.go:392] StartCluster: {Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: S
ubnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:29:23.083511  531586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 13:29:23.083604  531586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:29:23.138902  531586 cri.go:89] found id: "d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170"
	I0127 13:29:23.138937  531586 cri.go:89] found id: "b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46"
	I0127 13:29:23.138941  531586 cri.go:89] found id: "ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f"
	I0127 13:29:23.138945  531586 cri.go:89] found id: "e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67"
	I0127 13:29:23.138947  531586 cri.go:89] found id: "7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1"
	I0127 13:29:23.138952  531586 cri.go:89] found id: "5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc"
	I0127 13:29:23.138955  531586 cri.go:89] found id: "2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b"
	I0127 13:29:23.138958  531586 cri.go:89] found id: ""
	I0127 13:29:23.139005  531586 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 13:29:23.161523  531586 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T13:29:23Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 13:29:23.161644  531586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:29:23.177352  531586 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:29:23.177377  531586 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:29:23.177436  531586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:29:23.190684  531586 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:29:23.191837  531586 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-296225" does not appear in /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:23.192568  531586 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-466901/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-296225" cluster setting kubeconfig missing "newest-cni-296225" context setting]
	I0127 13:29:23.193462  531586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:23.195884  531586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:29:23.210992  531586 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.46
	I0127 13:29:23.211040  531586 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:29:23.211058  531586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 13:29:23.211141  531586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:29:23.266429  531586 cri.go:89] found id: "d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170"
	I0127 13:29:23.266458  531586 cri.go:89] found id: "b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46"
	I0127 13:29:23.266464  531586 cri.go:89] found id: "ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f"
	I0127 13:29:23.266468  531586 cri.go:89] found id: "e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67"
	I0127 13:29:23.266472  531586 cri.go:89] found id: "7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1"
	I0127 13:29:23.266477  531586 cri.go:89] found id: "5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc"
	I0127 13:29:23.266481  531586 cri.go:89] found id: "2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b"
	I0127 13:29:23.266485  531586 cri.go:89] found id: ""
	I0127 13:29:23.266492  531586 cri.go:252] Stopping containers: [d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170 b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46 ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67 7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1 5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc 2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b]
	I0127 13:29:23.266560  531586 ssh_runner.go:195] Run: which crictl
	I0127 13:29:23.272382  531586 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170 b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46 ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67 7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1 5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc 2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b
	I0127 13:29:23.324924  531586 ssh_runner.go:195] Run: sudo systemctl stop kubelet
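The lines above show the restart path enumerating kube-system containers through crictl (filtered on the io.kubernetes.pod.namespace label), stopping them with a 10-second timeout, and then stopping the kubelet so the static pods are not restarted. A minimal Go sketch of that same sequence, using os/exec and a hypothetical stopKubeSystemContainers helper (not minikube's actual code), could look like this:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers mirrors the crictl/systemctl sequence in the log:
// list kube-system container IDs, stop them, then stop the kubelet.
// Illustrative sketch only.
func stopKubeSystemContainers() error {
	// List container IDs labelled with the kube-system namespace.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return fmt.Errorf("listing containers: %w", err)
	}
	ids := strings.Fields(string(out))
	if len(ids) > 0 {
		// Stop every found container with the same 10s timeout the log shows.
		args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
		if err := exec.Command("sudo", args...).Run(); err != nil {
			return fmt.Errorf("stopping containers: %w", err)
		}
	}
	// Finally stop the kubelet so it does not relaunch the static pods.
	return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println("error:", err)
	}
}
```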
	I0127 13:29:23.345385  531586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:29:23.359679  531586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:29:23.359712  531586 kubeadm.go:157] found existing configuration files:
	
	I0127 13:29:23.359774  531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:29:23.371542  531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:29:23.371634  531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:29:23.383083  531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:29:23.393186  531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:29:23.393267  531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:29:23.406589  531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:29:23.417348  531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:29:23.417444  531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:29:23.430008  531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:29:23.441860  531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:29:23.441965  531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
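The grep-then-remove pattern above checks each control-plane config file for the expected endpoint and deletes it when the endpoint is missing (or the file does not exist), so kubeadm regenerates it in the next phase. A small Go sketch of that loop follows; the endpoint and file list are taken from the log, the helper name is hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfigs keeps a control-plane config file only if it already
// references the expected endpoint; otherwise it removes the file so kubeadm
// can regenerate it. Illustrative sketch of the pattern seen in the log.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file is absent.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s may not reference %s - removing\n", f, endpoint)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
```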
	I0127 13:29:23.452352  531586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:29:23.463556  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:23.634151  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:24.791692  531586 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.15748875s)
	I0127 13:29:24.791732  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:25.027708  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:25.110706  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
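The five kubeadm invocations above rebuild the control plane from the rendered /var/tmp/minikube/kubeadm.yaml, one phase at a time (certs, kubeconfig, kubelet-start, control-plane, etcd). A hedged Go sketch of that phase loop is below; the phase list, config path and PATH prefix come from the log, while the wrapper itself is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the kubeadm init phases shown in the log, in order,
// against the generated config. Illustrative sketch, not minikube's code.
func runInitPhases(binDir, config string) error {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`,
			binDir, phase, config)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	err := runInitPhases("/var/lib/minikube/binaries/v1.32.1", "/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
	}
}
```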
	I0127 13:29:25.211743  531586 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:25.211882  531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:25.712041  531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:25.596457  529417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:29:25.611060  529417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:29:25.631563  529417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:29:25.631668  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:25.631709  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-325510 minikube.k8s.io/updated_at=2025_01_27T13_29_25_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=default-k8s-diff-port-325510 minikube.k8s.io/primary=true
	I0127 13:29:25.654141  529417 ops.go:34] apiserver oom_adj: -16
	I0127 13:29:25.885770  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:26.386140  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:26.885887  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:27.386520  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:27.886746  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:28.386093  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:28.523381  529417 kubeadm.go:1113] duration metric: took 2.89179334s to wait for elevateKubeSystemPrivileges
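The elevateKubeSystemPrivileges step above creates the minikube-rbac cluster role binding, labels the node, and then polls `kubectl get sa default` roughly twice a second until the default service account exists. A minimal Go sketch of that polling loop, with the binary and kubeconfig paths taken from the log and a hypothetical helper name, might look like this:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until the
// default service account exists or the timeout expires, mirroring the
// repeated "get sa default" runs in the log. Illustrative sketch.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil // default service account is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.32.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		3*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
```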
	I0127 13:29:28.523431  529417 kubeadm.go:394] duration metric: took 4m34.628614328s to StartCluster
	I0127 13:29:28.523462  529417 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:28.523566  529417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:28.526181  529417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:28.526636  529417 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:29:28.526773  529417 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:29:28.526897  529417 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-325510"
	I0127 13:29:28.526920  529417 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-325510"
	I0127 13:29:28.526920  529417 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-325510"
	W0127 13:29:28.526930  529417 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:29:28.526943  529417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-325510"
	I0127 13:29:28.526965  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.527036  529417 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-325510"
	I0127 13:29:28.527054  529417 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-325510"
	W0127 13:29:28.527061  529417 addons.go:247] addon dashboard should already be in state true
	I0127 13:29:28.527086  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.527083  529417 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-325510"
	I0127 13:29:28.527117  529417 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-325510"
	W0127 13:29:28.527128  529417 addons.go:247] addon metrics-server should already be in state true
	I0127 13:29:28.527164  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.527436  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.527441  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.526898  529417 config.go:182] Loaded profile config "default-k8s-diff-port-325510": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:28.527475  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.527490  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.527619  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.527655  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.527667  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.527700  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.528609  529417 out.go:177] * Verifying Kubernetes components...
	I0127 13:29:28.530189  529417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:28.546697  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I0127 13:29:28.547331  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.547485  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I0127 13:29:28.547528  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
	I0127 13:29:28.547893  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.548297  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.548482  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.548497  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.548832  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.549020  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.549338  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.549354  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.549743  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
	I0127 13:29:28.549980  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.550227  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.550241  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.550306  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.550880  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.550926  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.551223  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.551394  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.551416  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.551971  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.552001  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.552189  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.552980  529417 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-325510"
	W0127 13:29:28.553005  529417 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:29:28.553038  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.553380  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.553426  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.555977  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.556013  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.572312  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I0127 13:29:28.573004  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.573598  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.573617  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.573988  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.574040  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
	I0127 13:29:28.574171  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.574508  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43399
	I0127 13:29:28.575096  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.575836  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.576253  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.576355  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.576375  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.577245  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.577419  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.579103  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.579756  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.579779  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.580518  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42715
	I0127 13:29:28.580886  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.581173  529417 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:29:28.581406  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.581423  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.581695  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.581855  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.582619  529417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:29:28.583309  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.583662  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.584326  529417 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:28.584346  529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:29:28.584368  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.587322  529417 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:29:28.587999  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.588047  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.591379  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.591427  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.591456  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.591496  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.591585  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.591752  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.591911  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.592584  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:29:28.592601  529417 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:29:28.592621  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.593660  529417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:29:26.212209  531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:26.236202  531586 api_server.go:72] duration metric: took 1.024459251s to wait for apiserver process to appear ...
	I0127 13:29:26.236238  531586 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:26.236266  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:26.236911  531586 api_server.go:269] stopped: https://192.168.72.46:8443/healthz: Get "https://192.168.72.46:8443/healthz": dial tcp 192.168.72.46:8443: connect: connection refused
	I0127 13:29:26.737118  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:29.390944  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:29:29.390990  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:29:29.391010  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:29.446439  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:29:29.446477  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:29:29.737006  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:29.743881  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:29:29.743915  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:29:30.237168  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:30.251557  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:29:30.251594  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:29:30.737227  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:30.744425  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:29:30.744461  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:29:31.237274  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:31.244159  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 200:
	ok
	I0127 13:29:31.252139  531586 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:31.252182  531586 api_server.go:131] duration metric: took 5.015933408s to wait for apiserver health ...
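The healthz polling above tolerates a connection-refused error, then 403 responses (anonymous users cannot read /healthz until the RBAC bootstrap roles exist), then 500 responses with individual check failures, and finally accepts the 200 "ok". A minimal Go sketch of such a polling loop follows; skipping TLS verification is an assumption for the sketch, since a real client would normally trust the cluster CA instead:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForAPIServerHealthz polls the apiserver /healthz endpoint until it
// returns 200 "ok" or the timeout expires, treating connection errors and
// non-200 responses as "not ready yet". Illustrative sketch.
func waitForAPIServerHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch-only assumption
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForAPIServerHealthz("https://192.168.72.46:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```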
	I0127 13:29:31.252194  531586 cni.go:84] Creating CNI manager for ""
	I0127 13:29:31.252203  531586 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:31.253925  531586 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:29:31.255434  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:29:31.267804  531586 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
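The two steps above create /etc/cni/net.d and drop a bridge CNI conflist into it. A Go sketch of that write is below; the embedded JSON is an illustrative bridge configuration with an assumed pod subnet, not the exact 496-byte file minikube generates:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeBridgeCNIConfig creates the CNI config directory and writes a bridge
// conflist into it, mirroring the mkdir + scp step in the log. The JSON is
// illustrative only.
func writeBridgeCNIConfig(dir string) error {
	conflist := `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644)
}

func main() {
	if err := writeBridgeCNIConfig("/etc/cni/net.d"); err != nil {
		fmt.Println(err)
	}
}
```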
	I0127 13:29:31.293560  531586 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:31.313542  531586 system_pods.go:59] 8 kube-system pods found
	I0127 13:29:31.313590  531586 system_pods.go:61] "coredns-668d6bf9bc-xvbfh" [0d7c4469-d90e-4487-8433-1167183525e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:29:31.313601  531586 system_pods.go:61] "etcd-newest-cni-296225" [97ed55b3-82a8-4ecf-a721-26a592f2c8cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:29:31.313612  531586 system_pods.go:61] "kube-apiserver-newest-cni-296225" [d31606a7-2b78-4859-80a7-35b783b0a444] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:29:31.313621  531586 system_pods.go:61] "kube-controller-manager-newest-cni-296225" [4d6c4da8-a13a-44c2-a877-13b9453142a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:29:31.313631  531586 system_pods.go:61] "kube-proxy-dzvvc" [1ec15899-c7eb-436d-9e74-fadde7ecacb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 13:29:31.313640  531586 system_pods.go:61] "kube-scheduler-newest-cni-296225" [2c230f78-68ac-4abb-9cdd-5cf666793981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:29:31.313655  531586 system_pods.go:61] "metrics-server-f79f97bbb-2pv7p" [1246f427-ed62-4202-8170-5ae96be7ccf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:31.313671  531586 system_pods.go:61] "storage-provisioner" [7b83dbf7-d497-42bb-9489-614ae5ba76fa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 13:29:31.313680  531586 system_pods.go:74] duration metric: took 20.080673ms to wait for pod list to return data ...
	I0127 13:29:31.313709  531586 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:29:31.321205  531586 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:29:31.321236  531586 node_conditions.go:123] node cpu capacity is 2
	I0127 13:29:31.321251  531586 node_conditions.go:105] duration metric: took 7.532371ms to run NodePressure ...
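The "waiting for kube-system pods to appear" and NodePressure checks above query the apiserver through the host kubeconfig. For reference, one way to reproduce the pod listing with client-go is sketched below, using the kubeconfig path from the log; this is a sketch, not the harness's own implementation:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// listKubeSystemPods lists kube-system pods and prints their phase, similar
// to the system_pods check in the log. Illustrative sketch.
func listKubeSystemPods(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %s\t%s\n", p.Name, p.Status.Phase)
	}
	return nil
}

func main() {
	if err := listKubeSystemPods("/home/jenkins/minikube-integration/20317-466901/kubeconfig"); err != nil {
		fmt.Println(err)
	}
}
```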
	I0127 13:29:31.321276  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:31.758136  531586 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:29:31.783447  531586 ops.go:34] apiserver oom_adj: -16
	I0127 13:29:31.783539  531586 kubeadm.go:597] duration metric: took 8.606153189s to restartPrimaryControlPlane
	I0127 13:29:31.783582  531586 kubeadm.go:394] duration metric: took 8.700235213s to StartCluster
	I0127 13:29:31.783614  531586 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:31.783739  531586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:31.786536  531586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:31.786926  531586 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:29:31.787022  531586 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:29:31.787188  531586 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-296225"
	I0127 13:29:31.787308  531586 config.go:182] Loaded profile config "newest-cni-296225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:31.787320  531586 addons.go:69] Setting metrics-server=true in profile "newest-cni-296225"
	I0127 13:29:31.787353  531586 addons.go:238] Setting addon metrics-server=true in "newest-cni-296225"
	W0127 13:29:31.787367  531586 addons.go:247] addon metrics-server should already be in state true
	I0127 13:29:31.787318  531586 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-296225"
	W0127 13:29:31.787388  531586 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:29:31.787413  531586 host.go:66] Checking if "newest-cni-296225" exists ...
	I0127 13:29:31.787446  531586 host.go:66] Checking if "newest-cni-296225" exists ...
	I0127 13:29:31.787286  531586 addons.go:69] Setting dashboard=true in profile "newest-cni-296225"
	I0127 13:29:31.787526  531586 addons.go:238] Setting addon dashboard=true in "newest-cni-296225"
	W0127 13:29:31.787557  531586 addons.go:247] addon dashboard should already be in state true
	I0127 13:29:31.787597  531586 host.go:66] Checking if "newest-cni-296225" exists ...
	I0127 13:29:31.787246  531586 addons.go:69] Setting default-storageclass=true in profile "newest-cni-296225"
	I0127 13:29:31.787654  531586 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-296225"
	I0127 13:29:31.787886  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.787922  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.787946  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.787971  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.788040  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.788067  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.788279  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.788348  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.791198  531586 out.go:177] * Verifying Kubernetes components...
	I0127 13:29:31.792729  531586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:31.809862  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43917
	I0127 13:29:31.810576  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.810735  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43481
	I0127 13:29:31.811453  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.811479  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.811565  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.812009  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.812033  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.812507  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.814254  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0127 13:29:31.814774  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.815750  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.816710  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.816754  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.817133  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.817157  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.817572  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.818143  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.818200  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.819519  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.824362  531586 addons.go:238] Setting addon default-storageclass=true in "newest-cni-296225"
	W0127 13:29:31.824386  531586 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:29:31.824421  531586 host.go:66] Checking if "newest-cni-296225" exists ...
	I0127 13:29:31.824804  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.824849  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.835403  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I0127 13:29:31.836274  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.836962  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.836997  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.837484  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.838061  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.838106  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.839703  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37671
	I0127 13:29:31.844903  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36575
	I0127 13:29:31.850434  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38467
	I0127 13:29:31.864579  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.864731  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.864805  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.865332  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.865353  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.865507  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.865520  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.865755  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.865888  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.866153  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.866263  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.866280  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.866349  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.866765  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.867410  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.867459  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.869030  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:31.870746  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:31.871229  531586 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:29:31.872679  531586 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:29:31.872852  531586 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:31.872877  531586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:29:31.872899  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:31.874840  531586 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:29:31.874867  531586 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:29:31.874889  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:31.879359  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.879992  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.880845  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:31.880876  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.880911  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:31.880935  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.881182  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:31.881276  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:31.881374  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:31.881423  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:31.881494  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:31.881545  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:31.881692  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:31.881713  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:31.890590  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42247
	I0127 13:29:31.891311  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.891961  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.891983  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.892382  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.892632  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.894810  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:31.895223  531586 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:31.895240  531586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:29:31.895450  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:31.895697  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43877
	I0127 13:29:31.896698  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.897633  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.897658  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.898129  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.898280  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.899110  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.899759  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:31.899782  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.899962  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:31.900155  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:31.900337  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:31.900466  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:31.904472  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:31.907054  531586 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:29:31.908332  531586 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:29:28.595128  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:29:28.595147  529417 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:29:28.595179  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.596235  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.597222  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.597304  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.597628  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.597788  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.597943  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.598078  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.599130  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.599670  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.599694  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.599880  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.600049  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.600195  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.600327  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.610825  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I0127 13:29:28.611379  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.611919  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.611939  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.612288  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.612480  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.614326  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.614636  529417 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:28.614668  529417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:29:28.614688  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.618088  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.618805  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.618958  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.619294  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.619517  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.619738  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.619953  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.750007  529417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:28.770798  529417 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-325510" to be "Ready" ...
	I0127 13:29:28.794753  529417 node_ready.go:49] node "default-k8s-diff-port-325510" has status "Ready":"True"
	I0127 13:29:28.794783  529417 node_ready.go:38] duration metric: took 23.945006ms for node "default-k8s-diff-port-325510" to be "Ready" ...
	I0127 13:29:28.794796  529417 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:28.801618  529417 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:28.841055  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:29:28.841089  529417 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:29:28.865445  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:29:28.865479  529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:29:28.870120  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:28.887649  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:29:28.887691  529417 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:29:28.908488  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:28.926717  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:29:28.926752  529417 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:29:28.949234  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:29:28.949269  529417 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:29:28.983403  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:28.983438  529417 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:29:29.010532  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:29:29.010567  529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:29:29.085215  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:29:29.085250  529417 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:29:29.085479  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:29.180902  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:29:29.180935  529417 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:29:29.239792  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:29:29.239830  529417 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:29:29.350534  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:29:29.350566  529417 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:29:29.463271  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:29.463315  529417 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:29:29.551176  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:30.055621  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.147081618s)
	I0127 13:29:30.055704  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.055723  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.056191  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.056215  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.056226  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.056255  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.056323  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.056341  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18618522s)
	I0127 13:29:30.056436  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.056465  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.056627  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.056649  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.056963  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.058774  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.058792  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.058808  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.058817  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.059068  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.059083  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.059098  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.083977  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.084003  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.084571  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.084583  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.084595  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.830919  529417 pod_ready.go:103] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:30.961132  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.875594685s)
	I0127 13:29:30.961202  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.961219  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.963600  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.963608  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.963645  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.963654  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.963662  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.964368  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.964392  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.964451  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.964463  529417 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-325510"
	I0127 13:29:32.478187  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.926948394s)
	I0127 13:29:32.478257  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:32.478272  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:32.478650  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:32.478671  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:32.478683  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:32.478693  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:32.479015  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:32.479033  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:32.482147  529417 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-325510 addons enable metrics-server
	
	I0127 13:29:32.483736  529417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:29:32.484840  529417 addons.go:514] duration metric: took 3.958103252s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:29:31.909581  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:29:31.909609  531586 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:29:31.909639  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:31.913216  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.913664  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:31.913695  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.913996  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:31.914211  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:31.914377  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:31.914514  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:32.089563  531586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:32.127765  531586 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:32.127896  531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:32.149480  531586 api_server.go:72] duration metric: took 362.501205ms to wait for apiserver process to appear ...
	I0127 13:29:32.149531  531586 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:32.149576  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:32.170573  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 200:
	ok
	I0127 13:29:32.171739  531586 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:32.171771  531586 api_server.go:131] duration metric: took 22.230634ms to wait for apiserver health ...
	I0127 13:29:32.171784  531586 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:32.186307  531586 system_pods.go:59] 8 kube-system pods found
	I0127 13:29:32.186342  531586 system_pods.go:61] "coredns-668d6bf9bc-xvbfh" [0d7c4469-d90e-4487-8433-1167183525e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:29:32.186349  531586 system_pods.go:61] "etcd-newest-cni-296225" [97ed55b3-82a8-4ecf-a721-26a592f2c8cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:29:32.186360  531586 system_pods.go:61] "kube-apiserver-newest-cni-296225" [d31606a7-2b78-4859-80a7-35b783b0a444] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:29:32.186368  531586 system_pods.go:61] "kube-controller-manager-newest-cni-296225" [4d6c4da8-a13a-44c2-a877-13b9453142a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:29:32.186373  531586 system_pods.go:61] "kube-proxy-dzvvc" [1ec15899-c7eb-436d-9e74-fadde7ecacb8] Running
	I0127 13:29:32.186380  531586 system_pods.go:61] "kube-scheduler-newest-cni-296225" [2c230f78-68ac-4abb-9cdd-5cf666793981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:29:32.186388  531586 system_pods.go:61] "metrics-server-f79f97bbb-2pv7p" [1246f427-ed62-4202-8170-5ae96be7ccf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:32.186393  531586 system_pods.go:61] "storage-provisioner" [7b83dbf7-d497-42bb-9489-614ae5ba76fa] Running
	I0127 13:29:32.186408  531586 system_pods.go:74] duration metric: took 14.616708ms to wait for pod list to return data ...
	I0127 13:29:32.186420  531586 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:29:32.194387  531586 default_sa.go:45] found service account: "default"
	I0127 13:29:32.194429  531586 default_sa.go:55] duration metric: took 7.999321ms for default service account to be created ...
	I0127 13:29:32.194447  531586 kubeadm.go:582] duration metric: took 407.475818ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:29:32.194469  531586 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:29:32.215128  531586 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:29:32.215228  531586 node_conditions.go:123] node cpu capacity is 2
	I0127 13:29:32.215257  531586 node_conditions.go:105] duration metric: took 20.782574ms to run NodePressure ...
	I0127 13:29:32.215325  531586 start.go:241] waiting for startup goroutines ...
	I0127 13:29:32.224708  531586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:29:32.224738  531586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:29:32.233504  531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:32.295258  531586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:29:32.295311  531586 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:29:32.340500  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:29:32.340623  531586 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:29:32.552816  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:29:32.552969  531586 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:29:32.615247  531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:32.615684  531586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:32.615709  531586 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:29:32.772893  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:29:32.772938  531586 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:29:32.831244  531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:32.939523  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:29:32.939558  531586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:29:33.121982  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:29:33.122026  531586 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:29:33.248581  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:29:33.248619  531586 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:29:33.339337  531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.105786367s)
	I0127 13:29:33.339401  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:33.339413  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:33.341380  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
	I0127 13:29:33.341463  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:33.341484  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:33.341498  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:33.341511  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:33.342973  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
	I0127 13:29:33.342984  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:33.342995  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:33.350366  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:33.350388  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:33.350671  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:33.350685  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:33.367462  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:29:33.367490  531586 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:29:33.428952  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:29:33.428989  531586 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:29:33.512094  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:33.512127  531586 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:29:33.585612  531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:34.628686  531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.013367863s)
	I0127 13:29:34.628749  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:34.628761  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:34.629106  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:34.629133  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:34.629143  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:34.629153  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:34.629394  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:34.629407  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:34.834013  531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.002708663s)
	I0127 13:29:34.834087  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:34.834105  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:34.834399  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:34.834418  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:34.834427  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:34.834435  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:34.834714  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:34.834733  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:34.834746  531586 addons.go:479] Verifying addon metrics-server=true in "newest-cni-296225"
	I0127 13:29:35.573250  531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.987594335s)
	I0127 13:29:35.573316  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:35.573332  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:35.573696  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
	I0127 13:29:35.573748  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:35.573762  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:35.573820  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:35.573835  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:35.574254  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
	I0127 13:29:35.575985  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:35.576005  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:35.577914  531586 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-296225 addons enable metrics-server
	
	I0127 13:29:35.579611  531586 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 13:29:35.580983  531586 addons.go:514] duration metric: took 3.79397273s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 13:29:35.581031  531586 start.go:246] waiting for cluster config update ...
	I0127 13:29:35.581050  531586 start.go:255] writing updated cluster config ...
	I0127 13:29:35.581368  531586 ssh_runner.go:195] Run: rm -f paused
	I0127 13:29:35.638909  531586 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 13:29:35.640552  531586 out.go:177] * Done! kubectl is now configured to use "newest-cni-296225" cluster and "default" namespace by default
	I0127 13:29:33.314653  529417 pod_ready.go:103] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:34.308087  529417 pod_ready.go:93] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:34.308114  529417 pod_ready.go:82] duration metric: took 5.506466228s for pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.308126  529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.314009  529417 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:34.314033  529417 pod_ready.go:82] duration metric: took 5.900062ms for pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.314044  529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.321801  529417 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:34.321823  529417 pod_ready.go:82] duration metric: took 7.77255ms for pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.321836  529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:36.328661  529417 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:38.833405  529417 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:39.331942  529417 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:39.331971  529417 pod_ready.go:82] duration metric: took 5.010119744s for pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:39.331983  529417 pod_ready.go:39] duration metric: took 10.537174991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:39.332004  529417 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:39.332061  529417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:39.364826  529417 api_server.go:72] duration metric: took 10.838138782s to wait for apiserver process to appear ...
	I0127 13:29:39.364856  529417 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:39.364880  529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0127 13:29:39.395339  529417 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0127 13:29:39.403463  529417 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:39.403502  529417 api_server.go:131] duration metric: took 38.63787ms to wait for apiserver health ...
	I0127 13:29:39.403515  529417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:39.428974  529417 system_pods.go:59] 9 kube-system pods found
	I0127 13:29:39.429008  529417 system_pods.go:61] "coredns-668d6bf9bc-mgxmm" [15f65844-c002-4253-9f43-609e6d3d86c0] Running
	I0127 13:29:39.429013  529417 system_pods.go:61] "coredns-668d6bf9bc-rlvv2" [b116f02c-d30f-4869-bef1-55722f0f1a58] Running
	I0127 13:29:39.429016  529417 system_pods.go:61] "etcd-default-k8s-diff-port-325510" [88fd4825-b74c-43e0-8a3e-fd60bb409b76] Running
	I0127 13:29:39.429021  529417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-325510" [4eeff905-b36f-4be8-ac24-77c8421495c4] Running
	I0127 13:29:39.429024  529417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-325510" [07956b85-b521-44cc-be77-675703803a17] Running
	I0127 13:29:39.429027  529417 system_pods.go:61] "kube-proxy-gb24h" [d0d50b9f-b02f-49dd-9a7a-78e202ce247a] Running
	I0127 13:29:39.429031  529417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-325510" [a7c2c0c5-c386-454d-9542-852b02901060] Running
	I0127 13:29:39.429037  529417 system_pods.go:61] "metrics-server-f79f97bbb-vtvnn" [07e0c335-6a2b-4ef3-b153-3689cdb7ccaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:39.429041  529417 system_pods.go:61] "storage-provisioner" [7b76ca76-2bfc-44c4-bfc3-5ac3f4cde72b] Running
	I0127 13:29:39.429048  529417 system_pods.go:74] duration metric: took 25.526569ms to wait for pod list to return data ...
	I0127 13:29:39.429056  529417 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:29:39.449041  529417 default_sa.go:45] found service account: "default"
	I0127 13:29:39.449083  529417 default_sa.go:55] duration metric: took 20.019081ms for default service account to be created ...
	I0127 13:29:39.449098  529417 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:29:39.468326  529417 system_pods.go:87] 9 kube-system pods found
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	2c59218aeb0b4       523cad1a4df73       38 seconds ago      Exited              dashboard-metrics-scraper   9                   b97b8e84adc01       dashboard-metrics-scraper-86c6bf9756-whltq
	63d1c3b56e594       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   f131828d89e6e       kubernetes-dashboard-7779f9b69b-l74bx
	69d92ad422477       6e38f40d628db       21 minutes ago      Running             storage-provisioner         0                   46d2d2a34739d       storage-provisioner
	f328b03590da3       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   193a5d0860335       coredns-668d6bf9bc-4qzkt
	c124cf3989669       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   0c04e67152ad0       coredns-668d6bf9bc-hpb7s
	310d5e851b70e       e29f9c7391fd9       22 minutes ago      Running             kube-proxy                  0                   ef472172035c0       kube-proxy-sxztd
	f6cbefb95932d       a9e7e6b294baf       22 minutes ago      Running             etcd                        2                   712e1d46f9460       etcd-no-preload-325431
	8fc79b79be3e9       95c0bda56fc4d       22 minutes ago      Running             kube-apiserver              2                   7eb1a821a76e9       kube-apiserver-no-preload-325431
	9c420da9d39ea       2b0d6572d062c       22 minutes ago      Running             kube-scheduler              2                   bbb87051682aa       kube-scheduler-no-preload-325431
	08725f33f2201       019ee182b58e2       22 minutes ago      Running             kube-controller-manager     2                   9d90f8ac6f519       kube-controller-manager-no-preload-325431
	
	
	==> containerd <==
	Jan 27 13:44:42 no-preload-325431 containerd[556]: time="2025-01-27T13:44:42.196435020Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 13:44:42 no-preload-325431 containerd[556]: time="2025-01-27T13:44:42.196676418Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.186537465Z" level=info msg="CreateContainer within sandbox \"b97b8e84adc017d4e671df39270b8b353d95d6d8c37314624eb1fc6e6a6ca4f1\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.216616165Z" level=info msg="CreateContainer within sandbox \"b97b8e84adc017d4e671df39270b8b353d95d6d8c37314624eb1fc6e6a6ca4f1\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6\""
	Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.217860425Z" level=info msg="StartContainer for \"75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6\""
	Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.301749074Z" level=info msg="StartContainer for \"75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6\" returns successfully"
	Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.361917812Z" level=info msg="shim disconnected" id=75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6 namespace=k8s.io
	Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.361994213Z" level=warning msg="cleaning up after shim disconnected" id=75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6 namespace=k8s.io
	Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.362004564Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.382397677Z" level=warning msg="cleanup warnings time=\"2025-01-27T13:44:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Jan 27 13:44:54 no-preload-325431 containerd[556]: time="2025-01-27T13:44:54.147046811Z" level=info msg="RemoveContainer for \"b976f57e1830de2e572ff5852dd68053d68d7485608441238b1d167515b5200c\""
	Jan 27 13:44:54 no-preload-325431 containerd[556]: time="2025-01-27T13:44:54.158663221Z" level=info msg="RemoveContainer for \"b976f57e1830de2e572ff5852dd68053d68d7485608441238b1d167515b5200c\" returns successfully"
	Jan 27 13:49:53 no-preload-325431 containerd[556]: time="2025-01-27T13:49:53.184063663Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 13:49:53 no-preload-325431 containerd[556]: time="2025-01-27T13:49:53.195148681Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 13:49:53 no-preload-325431 containerd[556]: time="2025-01-27T13:49:53.197738662Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 13:49:53 no-preload-325431 containerd[556]: time="2025-01-27T13:49:53.197792325Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.185892120Z" level=info msg="CreateContainer within sandbox \"b97b8e84adc017d4e671df39270b8b353d95d6d8c37314624eb1fc6e6a6ca4f1\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.213473862Z" level=info msg="CreateContainer within sandbox \"b97b8e84adc017d4e671df39270b8b353d95d6d8c37314624eb1fc6e6a6ca4f1\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0\""
	Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.214545707Z" level=info msg="StartContainer for \"2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0\""
	Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.300793920Z" level=info msg="StartContainer for \"2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0\" returns successfully"
	Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.354783382Z" level=info msg="shim disconnected" id=2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0 namespace=k8s.io
	Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.354850223Z" level=warning msg="cleaning up after shim disconnected" id=2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0 namespace=k8s.io
	Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.354862715Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.914494028Z" level=info msg="RemoveContainer for \"75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6\""
	Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.922285640Z" level=info msg="RemoveContainer for \"75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6\" returns successfully"
	
	
	==> coredns [c124cf39896699b77317720c2e7e03c7013edb4a0c398425791784c0bb22c08a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f328b03590da3b51a135d8436bb74ffaef7b999a0d57f694e8cd0ee45d9cd4fb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-325431
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-325431
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
	                    minikube.k8s.io/name=no-preload-325431
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T13_28_29_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 13:28:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-325431
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 13:50:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 13:49:22 +0000   Mon, 27 Jan 2025 13:28:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 13:49:22 +0000   Mon, 27 Jan 2025 13:28:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 13:49:22 +0000   Mon, 27 Jan 2025 13:28:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 13:49:22 +0000   Mon, 27 Jan 2025 13:28:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.116
	  Hostname:    no-preload-325431
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 102af14d504a46e9aa5f69946e6b1af9
	  System UUID:                102af14d-504a-46e9-aa5f-69946e6b1af9
	  Boot ID:                    baa560a6-23ce-43ec-bfff-051eeec1c311
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-4qzkt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-668d6bf9bc-hpb7s                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-325431                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-325431              250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-325431     200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-sxztd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-325431              100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-f79f97bbb-z7vjh                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-whltq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-l74bx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 22m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m   kubelet          Node no-preload-325431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m   kubelet          Node no-preload-325431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m   kubelet          Node no-preload-325431 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m   node-controller  Node no-preload-325431 event: Registered Node no-preload-325431 in Controller
	
	
	==> dmesg <==
	[  +0.042610] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.986924] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.897118] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.653093] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.709826] systemd-fstab-generator[480]: Ignoring "noauto" option for root device
	[  +0.056055] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061448] systemd-fstab-generator[492]: Ignoring "noauto" option for root device
	[  +0.189315] systemd-fstab-generator[506]: Ignoring "noauto" option for root device
	[  +0.121647] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.293864] systemd-fstab-generator[548]: Ignoring "noauto" option for root device
	[  +1.489196] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +2.327408] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +0.901291] kauditd_printk_skb: 225 callbacks suppressed
	[Jan27 13:24] kauditd_printk_skb: 40 callbacks suppressed
	[ +12.107985] kauditd_printk_skb: 82 callbacks suppressed
	[Jan27 13:28] systemd-fstab-generator[3094]: Ignoring "noauto" option for root device
	[  +7.082264] systemd-fstab-generator[3486]: Ignoring "noauto" option for root device
	[  +0.138438] kauditd_printk_skb: 87 callbacks suppressed
	[  +4.889129] systemd-fstab-generator[3574]: Ignoring "noauto" option for root device
	[  +0.116028] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.188036] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.170159] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.775732] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [f6cbefb95932d1ca8f242ac48b345cd84e86e1645198bb4017ab78eb469c44c1] <==
	{"level":"info","ts":"2025-01-27T13:28:29.466666Z","caller":"traceutil/trace.go:171","msg":"trace[89735720] transaction","detail":"{read_only:false; response_revision:251; number_of_response:1; }","duration":"133.234985ms","start":"2025-01-27T13:28:29.325047Z","end":"2025-01-27T13:28:29.458282Z","steps":["trace[89735720] 'process raft request'  (duration: 65.367463ms)","trace[89735720] 'compare'  (duration: 67.453248ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T13:28:30.022627Z","caller":"traceutil/trace.go:171","msg":"trace[1688003952] transaction","detail":"{read_only:false; response_revision:267; number_of_response:1; }","duration":"124.963577ms","start":"2025-01-27T13:28:29.897637Z","end":"2025-01-27T13:28:30.022600Z","steps":["trace[1688003952] 'process raft request'  (duration: 41.933674ms)","trace[1688003952] 'compare'  (duration: 82.611068ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T13:28:45.692324Z","caller":"traceutil/trace.go:171","msg":"trace[1329335384] linearizableReadLoop","detail":"{readStateIndex:516; appliedIndex:515; }","duration":"153.549674ms","start":"2025-01-27T13:28:45.538747Z","end":"2025-01-27T13:28:45.692297Z","steps":["trace[1329335384] 'read index received'  (duration: 153.240994ms)","trace[1329335384] 'applied index is now lower than readState.Index'  (duration: 308.026µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T13:28:45.692563Z","caller":"traceutil/trace.go:171","msg":"trace[1413168219] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"180.289564ms","start":"2025-01-27T13:28:45.512261Z","end":"2025-01-27T13:28:45.692550Z","steps":["trace[1413168219] 'process raft request'  (duration: 179.777083ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:28:45.692632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.862618ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:28:45.694495Z","caller":"traceutil/trace.go:171","msg":"trace[1326562105] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:501; }","duration":"155.73594ms","start":"2025-01-27T13:28:45.538719Z","end":"2025-01-27T13:28:45.694455Z","steps":["trace[1326562105] 'agreement among raft nodes before linearized reading'  (duration: 153.869235ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:28:48.289928Z","caller":"traceutil/trace.go:171","msg":"trace[371225448] linearizableReadLoop","detail":"{readStateIndex:523; appliedIndex:523; }","duration":"102.015314ms","start":"2025-01-27T13:28:48.187893Z","end":"2025-01-27T13:28:48.289908Z","steps":["trace[371225448] 'read index received'  (duration: 102.010597ms)","trace[371225448] 'applied index is now lower than readState.Index'  (duration: 3.703µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T13:28:48.292572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.11503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-f79f97bbb-z7vjh.181e8fb584d28d42\" limit:1 ","response":"range_response_count:1 size:814"}
	{"level":"info","ts":"2025-01-27T13:28:48.292612Z","caller":"traceutil/trace.go:171","msg":"trace[1018723515] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-f79f97bbb-z7vjh.181e8fb584d28d42; range_end:; response_count:1; response_revision:507; }","duration":"101.275248ms","start":"2025-01-27T13:28:48.191323Z","end":"2025-01-27T13:28:48.292598Z","steps":["trace[1018723515] 'agreement among raft nodes before linearized reading'  (duration: 101.163297ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:28:48.292960Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.053319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-f79f97bbb-z7vjh\" limit:1 ","response":"range_response_count:1 size:4559"}
	{"level":"info","ts":"2025-01-27T13:28:48.292997Z","caller":"traceutil/trace.go:171","msg":"trace[225346907] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-f79f97bbb-z7vjh; range_end:; response_count:1; response_revision:507; }","duration":"105.099208ms","start":"2025-01-27T13:28:48.187887Z","end":"2025-01-27T13:28:48.292986Z","steps":["trace[225346907] 'agreement among raft nodes before linearized reading'  (duration: 105.017786ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:28:48.289515Z","caller":"traceutil/trace.go:171","msg":"trace[142542872] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"113.448527ms","start":"2025-01-27T13:28:48.176045Z","end":"2025-01-27T13:28:48.289493Z","steps":["trace[142542872] 'process raft request'  (duration: 113.278071ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:28:48.452957Z","caller":"traceutil/trace.go:171","msg":"trace[1333608902] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"115.97828ms","start":"2025-01-27T13:28:48.336968Z","end":"2025-01-27T13:28:48.452946Z","steps":["trace[1333608902] 'process raft request'  (duration: 115.606445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:28:48.453348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.781956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:28:48.453378Z","caller":"traceutil/trace.go:171","msg":"trace[232578695] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:509; }","duration":"112.859513ms","start":"2025-01-27T13:28:48.340511Z","end":"2025-01-27T13:28:48.453370Z","steps":["trace[232578695] 'agreement among raft nodes before linearized reading'  (duration: 112.79216ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:28:48.453259Z","caller":"traceutil/trace.go:171","msg":"trace[662514671] linearizableReadLoop","detail":"{readStateIndex:525; appliedIndex:524; }","duration":"112.118516ms","start":"2025-01-27T13:28:48.340584Z","end":"2025-01-27T13:28:48.452702Z","steps":["trace[662514671] 'read index received'  (duration: 3.389634ms)","trace[662514671] 'applied index is now lower than readState.Index'  (duration: 108.728402ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T13:38:23.222020Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":830}
	{"level":"info","ts":"2025-01-27T13:38:23.265148Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":830,"took":"41.343905ms","hash":1792461569,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2740224,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2025-01-27T13:38:23.265777Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1792461569,"revision":830,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T13:43:23.230793Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1081}
	{"level":"info","ts":"2025-01-27T13:43:23.236676Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1081,"took":"4.865267ms","hash":2239879784,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1839104,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T13:43:23.236926Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2239879784,"revision":1081,"compact-revision":830}
	{"level":"info","ts":"2025-01-27T13:48:23.248481Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1341}
	{"level":"info","ts":"2025-01-27T13:48:23.253651Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1341,"took":"4.444244ms","hash":281533610,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1863680,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-01-27T13:48:23.253852Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":281533610,"revision":1341,"compact-revision":1081}
	
	
	==> kernel <==
	 13:50:34 up 26 min,  0 users,  load average: 0.09, 0.21, 0.21
	Linux no-preload-325431 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8fc79b79be3e960c95d6b40d47a560e4273b820618fabc89243fe61b8514ae93] <==
	I0127 13:46:25.981080       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:46:25.982305       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 13:48:24.977938       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:48:24.978277       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 13:48:25.980341       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:48:25.980431       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 13:48:25.980471       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:48:25.980785       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 13:48:25.981711       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:48:25.982900       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 13:49:25.982483       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:49:25.982630       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 13:49:25.983653       1 handler_proxy.go:99] no RequestInfo found in the context
	I0127 13:49:25.983840       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0127 13:49:25.983722       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 13:49:25.985927       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [08725f33f22015afbb4e9b267b2f8f3613d5ec097e94f2964d261efc74bdea31] <==
	E0127 13:46:02.755286       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:46:02.827040       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:46:32.762523       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:46:32.835723       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:47:02.770373       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:47:02.843857       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:47:32.778488       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:47:32.851664       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:48:02.785452       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:48:02.862747       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:48:32.794635       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:48:32.871792       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:49:02.801635       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:49:02.885565       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 13:49:22.679537       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-325431"
	E0127 13:49:32.811400       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:49:32.894581       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 13:49:56.935197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="112.417µs"
	E0127 13:50:02.819308       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:50:02.903505       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 13:50:04.915538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="57.797µs"
	I0127 13:50:07.199778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="64.337µs"
	I0127 13:50:20.211441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="63.231µs"
	E0127 13:50:32.828450       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:50:32.911645       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [310d5e851b70e308d600ebdd221377ceadf9a6ff38cc099849a8a2506647bcb8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 13:28:35.726645       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 13:28:35.742917       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.116"]
	E0127 13:28:35.743010       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 13:28:35.911764       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 13:28:35.911813       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 13:28:35.911838       1 server_linux.go:170] "Using iptables Proxier"
	I0127 13:28:35.939973       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 13:28:35.940350       1 server.go:497] "Version info" version="v1.32.1"
	I0127 13:28:35.940363       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:28:35.961605       1 config.go:329] "Starting node config controller"
	I0127 13:28:35.961653       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 13:28:35.967209       1 config.go:199] "Starting service config controller"
	I0127 13:28:35.967284       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 13:28:35.967317       1 config.go:105] "Starting endpoint slice config controller"
	I0127 13:28:35.967321       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 13:28:36.063268       1 shared_informer.go:320] Caches are synced for node config
	I0127 13:28:36.067958       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 13:28:36.068053       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [9c420da9d39eae7f0ea1c575c0892ac22db6c016c9dee10f72698622302c559d] <==
	W0127 13:28:25.855589       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 13:28:25.855662       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:28:25.868938       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 13:28:25.869011       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 13:28:25.914897       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 13:28:25.915012       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:28:25.924285       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 13:28:25.924355       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:28:25.949640       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 13:28:25.949727       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:28:26.157713       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 13:28:26.157791       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 13:28:26.183499       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 13:28:26.183597       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:28:26.221238       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 13:28:26.221291       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:28:26.263046       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 13:28:26.263165       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:28:26.273689       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 13:28:26.273724       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:28:26.286357       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 13:28:26.286394       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:28:26.320437       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 13:28:26.320498       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 13:28:29.172627       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 13:49:40 no-preload-325431 kubelet[3493]: E0127 13:49:40.186224    3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z7vjh" podUID="f904e246-cad3-4c86-8a01-f8eea49bf563"
	Jan 27 13:49:44 no-preload-325431 kubelet[3493]: I0127 13:49:44.182963    3493 scope.go:117] "RemoveContainer" containerID="75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6"
	Jan 27 13:49:44 no-preload-325431 kubelet[3493]: E0127 13:49:44.183752    3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-whltq_kubernetes-dashboard(1b50763d-b860-4f23-92b4-31db0fc0acf2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-whltq" podUID="1b50763d-b860-4f23-92b4-31db0fc0acf2"
	Jan 27 13:49:53 no-preload-325431 kubelet[3493]: E0127 13:49:53.198337    3493 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 13:49:53 no-preload-325431 kubelet[3493]: E0127 13:49:53.198713    3493 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 13:49:53 no-preload-325431 kubelet[3493]: E0127 13:49:53.199048    3493 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-z7vjh_kube-system(f904e246-cad3-4c86-8a01-f8eea49bf563): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 13:49:53 no-preload-325431 kubelet[3493]: E0127 13:49:53.200612    3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z7vjh" podUID="f904e246-cad3-4c86-8a01-f8eea49bf563"
	Jan 27 13:49:56 no-preload-325431 kubelet[3493]: I0127 13:49:56.182486    3493 scope.go:117] "RemoveContainer" containerID="75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6"
	Jan 27 13:49:56 no-preload-325431 kubelet[3493]: I0127 13:49:56.912016    3493 scope.go:117] "RemoveContainer" containerID="75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6"
	Jan 27 13:49:56 no-preload-325431 kubelet[3493]: I0127 13:49:56.912438    3493 scope.go:117] "RemoveContainer" containerID="2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0"
	Jan 27 13:49:56 no-preload-325431 kubelet[3493]: E0127 13:49:56.912646    3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-whltq_kubernetes-dashboard(1b50763d-b860-4f23-92b4-31db0fc0acf2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-whltq" podUID="1b50763d-b860-4f23-92b4-31db0fc0acf2"
	Jan 27 13:50:04 no-preload-325431 kubelet[3493]: I0127 13:50:04.891072    3493 scope.go:117] "RemoveContainer" containerID="2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0"
	Jan 27 13:50:04 no-preload-325431 kubelet[3493]: E0127 13:50:04.891911    3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-whltq_kubernetes-dashboard(1b50763d-b860-4f23-92b4-31db0fc0acf2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-whltq" podUID="1b50763d-b860-4f23-92b4-31db0fc0acf2"
	Jan 27 13:50:07 no-preload-325431 kubelet[3493]: E0127 13:50:07.183300    3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z7vjh" podUID="f904e246-cad3-4c86-8a01-f8eea49bf563"
	Jan 27 13:50:16 no-preload-325431 kubelet[3493]: I0127 13:50:16.185914    3493 scope.go:117] "RemoveContainer" containerID="2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0"
	Jan 27 13:50:16 no-preload-325431 kubelet[3493]: E0127 13:50:16.186200    3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-whltq_kubernetes-dashboard(1b50763d-b860-4f23-92b4-31db0fc0acf2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-whltq" podUID="1b50763d-b860-4f23-92b4-31db0fc0acf2"
	Jan 27 13:50:20 no-preload-325431 kubelet[3493]: E0127 13:50:20.183545    3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z7vjh" podUID="f904e246-cad3-4c86-8a01-f8eea49bf563"
	Jan 27 13:50:28 no-preload-325431 kubelet[3493]: E0127 13:50:28.206472    3493 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 13:50:28 no-preload-325431 kubelet[3493]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 13:50:28 no-preload-325431 kubelet[3493]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 13:50:28 no-preload-325431 kubelet[3493]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 13:50:28 no-preload-325431 kubelet[3493]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 13:50:29 no-preload-325431 kubelet[3493]: I0127 13:50:29.182389    3493 scope.go:117] "RemoveContainer" containerID="2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0"
	Jan 27 13:50:29 no-preload-325431 kubelet[3493]: E0127 13:50:29.182835    3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-whltq_kubernetes-dashboard(1b50763d-b860-4f23-92b4-31db0fc0acf2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-whltq" podUID="1b50763d-b860-4f23-92b4-31db0fc0acf2"
	Jan 27 13:50:32 no-preload-325431 kubelet[3493]: E0127 13:50:32.183849    3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z7vjh" podUID="f904e246-cad3-4c86-8a01-f8eea49bf563"
	
	
	==> kubernetes-dashboard [63d1c3b56e594b09fa04be7e99fd9b3090948c50a1e0413d623d6c1658fa2fbf] <==
	2025/01/27 13:38:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:38:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:39:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:39:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:40:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:40:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:41:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:41:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:42:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:42:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:43:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:43:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:44:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:44:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:45:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:45:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:46:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:46:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:47:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:47:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:48:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:48:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:49:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:49:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:50:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [69d92ad422477870702389d231443715a6ccf0a5f7ffcac6d86ac0f46c9c7a46] <==
	I0127 13:28:35.971454       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 13:28:36.012911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 13:28:36.012974       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 13:28:36.034583       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 13:28:36.037007       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-325431_af4434bc-f34b-4451-ae25-c663fba38490!
	I0127 13:28:36.038799       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62559846-3b0f-47fd-992f-23dd8f800587", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-325431_af4434bc-f34b-4451-ae25-c663fba38490 became leader
	I0127 13:28:36.144365       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-325431_af4434bc-f34b-4451-ae25-c663fba38490!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-325431 -n no-preload-325431
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-325431 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-z7vjh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-325431 describe pod metrics-server-f79f97bbb-z7vjh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-325431 describe pod metrics-server-f79f97bbb-z7vjh: exit status 1 (68.403671ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-z7vjh" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-325431 describe pod metrics-server-f79f97bbb-z7vjh: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1623.09s)

TestStartStop/group/embed-certs/serial/SecondStart (1611.78s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-766944 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 13:24:12.367390  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:12.373800  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:12.385229  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:12.406673  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:12.448125  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:12.530315  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:12.691936  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:13.013762  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-766944 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m49.512705117s)

-- stdout --
	* [embed-certs-766944] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-766944" primary control-plane node in "embed-certs-766944" cluster
	* Restarting existing kvm2 VM for "embed-certs-766944" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-766944 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

-- /stdout --
** stderr ** 
	I0127 13:24:09.262609  529251 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:24:09.262713  529251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:24:09.262720  529251 out.go:358] Setting ErrFile to fd 2...
	I0127 13:24:09.262725  529251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:24:09.262882  529251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 13:24:09.263472  529251 out.go:352] Setting JSON to false
	I0127 13:24:09.264396  529251 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":36346,"bootTime":1737947903,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:24:09.264513  529251 start.go:139] virtualization: kvm guest
	I0127 13:24:09.266950  529251 out.go:177] * [embed-certs-766944] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:24:09.268526  529251 notify.go:220] Checking for updates...
	I0127 13:24:09.268549  529251 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:24:09.270080  529251 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:24:09.271590  529251 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:24:09.273033  529251 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	I0127 13:24:09.274452  529251 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:24:09.275806  529251 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:24:09.277667  529251 config.go:182] Loaded profile config "embed-certs-766944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:24:09.278277  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:24:09.278330  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:24:09.294417  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0127 13:24:09.295001  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:24:09.295724  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:24:09.295762  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:24:09.296128  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:24:09.296368  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:24:09.296646  529251 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:24:09.296956  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:24:09.297002  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:24:09.312316  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44197
	I0127 13:24:09.312849  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:24:09.313384  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:24:09.313407  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:24:09.313731  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:24:09.313943  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:24:09.350509  529251 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:24:09.351916  529251 start.go:297] selected driver: kvm2
	I0127 13:24:09.351931  529251 start.go:901] validating driver "kvm2" against &{Name:embed-certs-766944 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-766944 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:24:09.352034  529251 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:24:09.352757  529251 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:24:09.352858  529251 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-466901/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:24:09.369424  529251 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:24:09.369828  529251 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:24:09.369867  529251 cni.go:84] Creating CNI manager for ""
	I0127 13:24:09.369927  529251 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:24:09.369962  529251 start.go:340] cluster config:
	{Name:embed-certs-766944 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-766944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-hos
t Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:24:09.370070  529251 iso.go:125] acquiring lock: {Name:mkcc3db98c9d4661e75c49bd9b203d0232dff8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:24:09.372205  529251 out.go:177] * Starting "embed-certs-766944" primary control-plane node in "embed-certs-766944" cluster
	I0127 13:24:09.373636  529251 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:24:09.373684  529251 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 13:24:09.373695  529251 cache.go:56] Caching tarball of preloaded images
	I0127 13:24:09.373794  529251 preload.go:172] Found /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 13:24:09.373805  529251 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
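The lines above show minikube checking its local preload cache (preload.go) before provisioning: the tarball of preloaded images is looked up on disk and the download is skipped when it is already present. A minimal Go sketch of that "use the cache if the file exists" check, assuming a plain stat of the path is sufficient (the path is taken from the log; preloadCached is a hypothetical helper, not minikube's actual implementation):

```go
package main

import (
	"fmt"
	"os"
)

// preloadCached reports whether the preload tarball already exists locally,
// in which case the download step can be skipped. Hypothetical helper, not
// minikube's preload.go code.
func preloadCached(path string) bool {
	info, err := os.Stat(path)
	return err == nil && info.Size() > 0
}

func main() {
	p := "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4"
	if preloadCached(p) {
		fmt.Println("found local preload, skipping download")
	} else {
		fmt.Println("preload missing, would download")
	}
}
```

A production version would also verify the tarball's checksum rather than trusting any non-empty file; this sketch skips that step.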
	I0127 13:24:09.373911  529251 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/embed-certs-766944/config.json ...
	I0127 13:24:09.374093  529251 start.go:360] acquireMachinesLock for embed-certs-766944: {Name:mke115b779db52cb0a5f0a05f83d5bad0a35c561 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:24:09.374145  529251 start.go:364] duration metric: took 33.056µs to acquireMachinesLock for "embed-certs-766944"
	I0127 13:24:09.374160  529251 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:24:09.374168  529251 fix.go:54] fixHost starting: 
	I0127 13:24:09.374440  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:24:09.374479  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:24:09.389945  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I0127 13:24:09.390370  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:24:09.390833  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:24:09.390856  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:24:09.391242  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:24:09.391521  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:24:09.391680  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:24:09.393355  529251 fix.go:112] recreateIfNeeded on embed-certs-766944: state=Stopped err=<nil>
	I0127 13:24:09.393392  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	W0127 13:24:09.393543  529251 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:24:09.395698  529251 out.go:177] * Restarting existing kvm2 VM for "embed-certs-766944" ...
	I0127 13:24:09.396996  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Start
	I0127 13:24:09.397213  529251 main.go:141] libmachine: (embed-certs-766944) starting domain...
	I0127 13:24:09.397228  529251 main.go:141] libmachine: (embed-certs-766944) ensuring networks are active...
	I0127 13:24:09.398171  529251 main.go:141] libmachine: (embed-certs-766944) Ensuring network default is active
	I0127 13:24:09.398500  529251 main.go:141] libmachine: (embed-certs-766944) Ensuring network mk-embed-certs-766944 is active
	I0127 13:24:09.398816  529251 main.go:141] libmachine: (embed-certs-766944) getting domain XML...
	I0127 13:24:09.399535  529251 main.go:141] libmachine: (embed-certs-766944) creating domain...
	I0127 13:24:10.621045  529251 main.go:141] libmachine: (embed-certs-766944) waiting for IP...
	I0127 13:24:10.621913  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:10.622331  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:10.622419  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:10.622327  529286 retry.go:31] will retry after 247.522783ms: waiting for domain to come up
	I0127 13:24:10.871919  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:10.872449  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:10.872480  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:10.872406  529286 retry.go:31] will retry after 358.355456ms: waiting for domain to come up
	I0127 13:24:11.232273  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:11.232841  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:11.232873  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:11.232802  529286 retry.go:31] will retry after 432.268946ms: waiting for domain to come up
	I0127 13:24:11.666412  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:11.666849  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:11.666883  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:11.666815  529286 retry.go:31] will retry after 550.90467ms: waiting for domain to come up
	I0127 13:24:12.219234  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:12.219718  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:12.219750  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:12.219668  529286 retry.go:31] will retry after 705.916295ms: waiting for domain to come up
	I0127 13:24:13.002158  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:13.002618  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:13.002646  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:13.002540  529286 retry.go:31] will retry after 610.457859ms: waiting for domain to come up
	I0127 13:24:13.614274  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:13.614836  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:13.614901  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:13.614814  529286 retry.go:31] will retry after 1.131056544s: waiting for domain to come up
	I0127 13:24:14.747534  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:14.748082  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:14.748132  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:14.748050  529286 retry.go:31] will retry after 1.227103617s: waiting for domain to come up
	I0127 13:24:15.977514  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:15.977953  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:15.977985  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:15.977935  529286 retry.go:31] will retry after 1.457777664s: waiting for domain to come up
	I0127 13:24:17.437639  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:17.438107  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:17.438143  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:17.438063  529286 retry.go:31] will retry after 1.994601377s: waiting for domain to come up
	I0127 13:24:19.435011  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:19.435626  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:19.435656  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:19.435602  529286 retry.go:31] will retry after 1.936193867s: waiting for domain to come up
	I0127 13:24:21.374694  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:21.375288  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:21.375310  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:21.375232  529286 retry.go:31] will retry after 2.349135377s: waiting for domain to come up
	I0127 13:24:23.727716  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:23.728464  529251 main.go:141] libmachine: (embed-certs-766944) DBG | unable to find current IP address of domain embed-certs-766944 in network mk-embed-certs-766944
	I0127 13:24:23.728493  529251 main.go:141] libmachine: (embed-certs-766944) DBG | I0127 13:24:23.728405  529286 retry.go:31] will retry after 3.564526831s: waiting for domain to come up
	I0127 13:24:27.296717  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.297242  529251 main.go:141] libmachine: (embed-certs-766944) found domain IP: 192.168.39.24
	I0127 13:24:27.297263  529251 main.go:141] libmachine: (embed-certs-766944) reserving static IP address...
	I0127 13:24:27.297272  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has current primary IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.297770  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "embed-certs-766944", mac: "52:54:00:73:4a:1b", ip: "192.168.39.24"} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:27.297794  529251 main.go:141] libmachine: (embed-certs-766944) reserved static IP address 192.168.39.24 for domain embed-certs-766944
	I0127 13:24:27.297821  529251 main.go:141] libmachine: (embed-certs-766944) DBG | skip adding static IP to network mk-embed-certs-766944 - found existing host DHCP lease matching {name: "embed-certs-766944", mac: "52:54:00:73:4a:1b", ip: "192.168.39.24"}
	I0127 13:24:27.297837  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Getting to WaitForSSH function...
	I0127 13:24:27.297848  529251 main.go:141] libmachine: (embed-certs-766944) waiting for SSH...
	I0127 13:24:27.299950  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.300320  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:27.300349  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.300490  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Using SSH client type: external
	I0127 13:24:27.300546  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa (-rw-------)
	I0127 13:24:27.300574  529251 main.go:141] libmachine: (embed-certs-766944) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.24 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:24:27.300585  529251 main.go:141] libmachine: (embed-certs-766944) DBG | About to run SSH command:
	I0127 13:24:27.300594  529251 main.go:141] libmachine: (embed-certs-766944) DBG | exit 0
	I0127 13:24:27.423663  529251 main.go:141] libmachine: (embed-certs-766944) DBG | SSH cmd err, output: <nil>: 
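The long run of retry.go:31 lines above is minikube polling for the restarted domain to obtain its DHCP lease and then for SSH to answer, with the wait between attempts growing each round. A minimal sketch of that poll-with-growing-backoff pattern, assuming a plain TCP dial to port 22 is an adequate readiness probe (IP taken from the log; waitForSSH is an illustrative helper, not the libmachine implementation):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials host:22 until it answers, roughly doubling the pause
// between attempts, and gives up once the overall deadline has passed.
func waitForSSH(host string, deadline time.Duration) error {
	delay := 250 * time.Millisecond
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("will retry after %v: waiting for SSH\n", delay)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("timed out waiting for SSH on %s", host)
}

func main() {
	if err := waitForSSH("192.168.39.24", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

Growing the delay keeps the log noise bounded while the guest boots; as the DBG lines show, the kvm2 driver actually finds the address by matching the domain's MAC against the libvirt network's DHCP leases rather than dialing blindly.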
	I0127 13:24:27.424128  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetConfigRaw
	I0127 13:24:27.424852  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetIP
	I0127 13:24:27.427374  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.427795  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:27.427836  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.428151  529251 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/embed-certs-766944/config.json ...
	I0127 13:24:27.428380  529251 machine.go:93] provisionDockerMachine start ...
	I0127 13:24:27.428401  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:24:27.428581  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:24:27.431209  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.431609  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:27.431632  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.431861  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:24:27.432087  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:24:27.432276  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:24:27.432461  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:24:27.432613  529251 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:27.432852  529251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0127 13:24:27.432868  529251 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:24:27.540028  529251 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:24:27.540059  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetMachineName
	I0127 13:24:27.540305  529251 buildroot.go:166] provisioning hostname "embed-certs-766944"
	I0127 13:24:27.540320  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetMachineName
	I0127 13:24:27.540548  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:24:27.543135  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.543544  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:27.543570  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.543746  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:24:27.543925  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:24:27.544109  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:24:27.544259  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:24:27.544422  529251 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:27.544643  529251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0127 13:24:27.544664  529251 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-766944 && echo "embed-certs-766944" | sudo tee /etc/hostname
	I0127 13:24:27.672206  529251 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-766944
	
	I0127 13:24:27.672245  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:24:27.674979  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.675419  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:27.675449  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.675690  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:24:27.675905  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:24:27.676140  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:24:27.676309  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:24:27.676474  529251 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:27.676681  529251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0127 13:24:27.676697  529251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-766944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-766944/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-766944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:24:27.803409  529251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:24:27.803442  529251 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-466901/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-466901/.minikube}
	I0127 13:24:27.803479  529251 buildroot.go:174] setting up certificates
	I0127 13:24:27.803493  529251 provision.go:84] configureAuth start
	I0127 13:24:27.803504  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetMachineName
	I0127 13:24:27.803796  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetIP
	I0127 13:24:27.806225  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.806534  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:27.806560  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.806759  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:24:27.809171  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.809567  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:27.809598  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:27.809730  529251 provision.go:143] copyHostCerts
	I0127 13:24:27.809787  529251 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem, removing ...
	I0127 13:24:27.809799  529251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem
	I0127 13:24:27.809876  529251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem (1082 bytes)
	I0127 13:24:27.810025  529251 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem, removing ...
	I0127 13:24:27.810037  529251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem
	I0127 13:24:27.810072  529251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem (1123 bytes)
	I0127 13:24:27.810164  529251 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem, removing ...
	I0127 13:24:27.810176  529251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem
	I0127 13:24:27.810209  529251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem (1675 bytes)
	I0127 13:24:27.810293  529251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem org=jenkins.embed-certs-766944 san=[127.0.0.1 192.168.39.24 embed-certs-766944 localhost minikube]
	I0127 13:24:28.001118  529251 provision.go:177] copyRemoteCerts
	I0127 13:24:28.001188  529251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:24:28.001230  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:24:28.003960  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:28.004248  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:28.004281  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:28.004447  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:24:28.004640  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:24:28.004818  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:24:28.004909  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:24:28.085722  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:24:28.110548  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 13:24:28.139179  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 13:24:28.167410  529251 provision.go:87] duration metric: took 363.900177ms to configureAuth
	I0127 13:24:28.167445  529251 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:24:28.167658  529251 config.go:182] Loaded profile config "embed-certs-766944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:24:28.167672  529251 machine.go:96] duration metric: took 739.278663ms to provisionDockerMachine
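During configureAuth above (provision.go:117) minikube generates a server certificate whose SANs cover 127.0.0.1, 192.168.39.24, embed-certs-766944, localhost and minikube, then copies it to /etc/docker on the guest. A short Go sketch of building a certificate with that SAN set using crypto/x509; it is self-signed here for brevity, whereas the real server.pem is signed by the minikube CA (ca.pem/ca-key.pem):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and certificate template carrying the same SANs the log reports.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-766944"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-766944", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.24")},
	}
	// Self-signed for brevity; minikube signs server.pem with its own CA instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```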
	I0127 13:24:28.167684  529251 start.go:293] postStartSetup for "embed-certs-766944" (driver="kvm2")
	I0127 13:24:28.167698  529251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:24:28.167783  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:24:28.168140  529251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:24:28.168178  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:24:28.170803  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:28.171199  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:28.171234  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:28.171407  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:24:28.171600  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:24:28.171793  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:24:28.171951  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:24:28.254004  529251 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:24:28.258504  529251 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:24:28.258536  529251 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/addons for local assets ...
	I0127 13:24:28.258619  529251 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/files for local assets ...
	I0127 13:24:28.258729  529251 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem -> 4742752.pem in /etc/ssl/certs
	I0127 13:24:28.258830  529251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:24:28.268722  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /etc/ssl/certs/4742752.pem (1708 bytes)
	I0127 13:24:28.297707  529251 start.go:296] duration metric: took 130.006509ms for postStartSetup
	I0127 13:24:28.297754  529251 fix.go:56] duration metric: took 18.923585624s for fixHost
	I0127 13:24:28.297776  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:24:28.300676  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:28.301034  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:28.301056  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:28.301250  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:24:28.301465  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:24:28.301622  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:24:28.301745  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:24:28.301905  529251 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:28.302080  529251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0127 13:24:28.302090  529251 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:24:28.404465  529251 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984268.371815476
	
	I0127 13:24:28.404492  529251 fix.go:216] guest clock: 1737984268.371815476
	I0127 13:24:28.404503  529251 fix.go:229] Guest: 2025-01-27 13:24:28.371815476 +0000 UTC Remote: 2025-01-27 13:24:28.297758067 +0000 UTC m=+19.074278058 (delta=74.057409ms)
	I0127 13:24:28.404559  529251 fix.go:200] guest clock delta is within tolerance: 74.057409ms
	I0127 13:24:28.404570  529251 start.go:83] releasing machines lock for "embed-certs-766944", held for 19.030413614s
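fix.go above reads the guest clock by running date +%s.%N over SSH and only proceeds because the roughly 74ms delta against the host is within tolerance. A small sketch of that comparison, parsing the seconds.nanoseconds string the command returns (the sample value is the one captured in the log; the 2-second tolerance is an assumption for illustration, not minikube's constant):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestTime parses `date +%s.%N` output captured from the VM into a time.Time.
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestTime("1737984268.371815476") // value from the log line above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would sync time\n", delta)
	}
}
```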
	I0127 13:24:28.404607  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:24:28.404880  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetIP
	I0127 13:24:28.407634  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:28.407977  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:28.408005  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:28.408235  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:24:28.408756  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:24:28.408945  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:24:28.409027  529251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:24:28.409084  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:24:28.409205  529251 ssh_runner.go:195] Run: cat /version.json
	I0127 13:24:28.409239  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:24:28.411942  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:28.412088  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:28.412354  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:28.412382  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:28.412407  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:28.412420  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:28.412511  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:24:28.412687  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:24:28.412700  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:24:28.412813  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:24:28.412941  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:24:28.412997  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:24:28.413093  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:24:28.413139  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:24:28.517048  529251 ssh_runner.go:195] Run: systemctl --version
	I0127 13:24:28.523918  529251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:24:28.530122  529251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:24:28.530218  529251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:24:28.548004  529251 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 13:24:28.548039  529251 start.go:495] detecting cgroup driver to use...
	I0127 13:24:28.548130  529251 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 13:24:28.583244  529251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 13:24:28.598938  529251 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:24:28.598997  529251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:24:28.614340  529251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:24:28.629773  529251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:24:28.745775  529251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:24:28.910636  529251 docker.go:233] disabling docker service ...
	I0127 13:24:28.910707  529251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:24:28.927158  529251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:24:28.942719  529251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:24:29.093401  529251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:24:29.228775  529251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:24:29.245850  529251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:24:29.266881  529251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 13:24:29.278449  529251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 13:24:29.289902  529251 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 13:24:29.289991  529251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 13:24:29.301639  529251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:24:29.313024  529251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 13:24:29.324390  529251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:24:29.336239  529251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:24:29.347871  529251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 13:24:29.359316  529251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 13:24:29.371211  529251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 13:24:29.385321  529251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:24:29.398594  529251 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:24:29.398662  529251 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:24:29.413447  529251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:24:29.424369  529251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:24:29.561908  529251 ssh_runner.go:195] Run: sudo systemctl restart containerd
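The block above rewrites /etc/containerd/config.toml with a series of sed commands run over SSH (pause image, cgroupfs cgroup driver via SystemdCgroup = false, runc v2, CNI conf_dir, unprivileged ports) and then restarts containerd. A minimal Go sketch of one of those in-place substitutions, the SystemdCgroup flip; this is only an illustration of the edit, the test itself performs it with sed on the guest:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, updated, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```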
	I0127 13:24:29.594122  529251 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 13:24:29.594247  529251 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 13:24:29.599726  529251 retry.go:31] will retry after 1.028861348s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 13:24:30.628962  529251 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 13:24:30.634633  529251 start.go:563] Will wait 60s for crictl version
	I0127 13:24:30.634709  529251 ssh_runner.go:195] Run: which crictl
	I0127 13:24:30.638920  529251 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:24:30.692323  529251 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 13:24:30.692406  529251 ssh_runner.go:195] Run: containerd --version
	I0127 13:24:30.723312  529251 ssh_runner.go:195] Run: containerd --version
	I0127 13:24:30.754067  529251 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 13:24:30.755389  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetIP
	I0127 13:24:30.758794  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:30.759218  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:24:30.759243  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:24:30.759518  529251 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 13:24:30.764246  529251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:24:30.779628  529251 kubeadm.go:883] updating cluster {Name:embed-certs-766944 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-766944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:24:30.779804  529251 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:24:30.779885  529251 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:24:30.814751  529251 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:24:30.814782  529251 containerd.go:534] Images already preloaded, skipping extraction
	I0127 13:24:30.814841  529251 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:24:30.850348  529251 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:24:30.850373  529251 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:24:30.850382  529251 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.32.1 containerd true true} ...
	I0127 13:24:30.850495  529251 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-766944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:embed-certs-766944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:24:30.850552  529251 ssh_runner.go:195] Run: sudo crictl info
	I0127 13:24:30.906952  529251 cni.go:84] Creating CNI manager for ""
	I0127 13:24:30.906978  529251 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:24:30.906988  529251 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:24:30.907009  529251 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-766944 NodeName:embed-certs-766944 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:24:30.907118  529251 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-766944"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.24"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:24:30.907185  529251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:24:30.919697  529251 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:24:30.919792  529251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:24:30.931533  529251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0127 13:24:30.949820  529251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:24:30.968094  529251 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2311 bytes)
	I0127 13:24:30.990016  529251 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I0127 13:24:30.994380  529251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:24:31.008869  529251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:24:31.139330  529251 ssh_runner.go:195] Run: sudo systemctl start kubelet
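The /bin/bash -c one-liners above (for host.minikube.internal earlier and control-plane.minikube.internal here) update /etc/hosts idempotently: filter out any stale line ending in the name, append the current IP, and copy the result back into place. A small Go sketch of the same filter-and-append logic (path, IP and name taken from the log; ensureHostsEntry is an illustrative helper, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for name and appends "ip\tname",
// mirroring the grep -v / echo / cp pipeline shown in the log above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.24", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```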
	I0127 13:24:31.159113  529251 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/embed-certs-766944 for IP: 192.168.39.24
	I0127 13:24:31.159140  529251 certs.go:194] generating shared ca certs ...
	I0127 13:24:31.159170  529251 certs.go:226] acquiring lock for ca certs: {Name:mk60f2aac78eb363c5e06a00675357d94c0df88d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:24:31.159492  529251 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key
	I0127 13:24:31.159577  529251 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key
	I0127 13:24:31.159595  529251 certs.go:256] generating profile certs ...
	I0127 13:24:31.159732  529251 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/embed-certs-766944/client.key
	I0127 13:24:31.159827  529251 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/embed-certs-766944/apiserver.key.ff2ed29a
	I0127 13:24:31.159898  529251 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/embed-certs-766944/proxy-client.key
	I0127 13:24:31.160049  529251 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem (1338 bytes)
	W0127 13:24:31.160101  529251 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275_empty.pem, impossibly tiny 0 bytes
	I0127 13:24:31.160122  529251 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:24:31.160163  529251 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem (1082 bytes)
	I0127 13:24:31.160199  529251 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:24:31.160230  529251 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem (1675 bytes)
	I0127 13:24:31.160301  529251 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem (1708 bytes)
	I0127 13:24:31.161158  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:24:31.199191  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:24:31.228574  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:24:31.262976  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:24:31.304564  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/embed-certs-766944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 13:24:31.334614  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/embed-certs-766944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 13:24:31.366797  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/embed-certs-766944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:24:31.404975  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/embed-certs-766944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 13:24:31.435897  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:24:31.464697  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem --> /usr/share/ca-certificates/474275.pem (1338 bytes)
	I0127 13:24:31.496118  529251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /usr/share/ca-certificates/4742752.pem (1708 bytes)
	I0127 13:24:31.524110  529251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:24:31.543341  529251 ssh_runner.go:195] Run: openssl version
	I0127 13:24:31.550197  529251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:24:31.564436  529251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:24:31.570109  529251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:10 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:24:31.570191  529251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:24:31.576873  529251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:24:31.590512  529251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/474275.pem && ln -fs /usr/share/ca-certificates/474275.pem /etc/ssl/certs/474275.pem"
	I0127 13:24:31.603957  529251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/474275.pem
	I0127 13:24:31.611535  529251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:18 /usr/share/ca-certificates/474275.pem
	I0127 13:24:31.611605  529251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/474275.pem
	I0127 13:24:31.618623  529251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/474275.pem /etc/ssl/certs/51391683.0"
	I0127 13:24:31.630936  529251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4742752.pem && ln -fs /usr/share/ca-certificates/4742752.pem /etc/ssl/certs/4742752.pem"
	I0127 13:24:31.644005  529251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4742752.pem
	I0127 13:24:31.649337  529251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:18 /usr/share/ca-certificates/4742752.pem
	I0127 13:24:31.649420  529251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4742752.pem
	I0127 13:24:31.655784  529251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4742752.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:24:31.669323  529251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:24:31.674734  529251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:24:31.681495  529251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:24:31.688512  529251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:24:31.695401  529251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:24:31.701886  529251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:24:31.708354  529251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
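	[editor's note] The openssl runs above validate each control-plane certificate with `-checkend 86400`, i.e. "still valid for at least 24 hours". A minimal Go equivalent of that check, assuming a local PEM file path (illustrative only, not minikube's certs.go):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the PEM certificate at path remains valid
// for at least d, the Go analogue of `openssl x509 -noout -checkend`.
func certValidFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
```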
	I0127 13:24:31.714964  529251 kubeadm.go:392] StartCluster: {Name:embed-certs-766944 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-766944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:24:31.715078  529251 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 13:24:31.715172  529251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:24:31.755435  529251 cri.go:89] found id: "cbec54c50f03c72f0f70d780ed3905ce74f237ac03186583c058438d47471288"
	I0127 13:24:31.755466  529251 cri.go:89] found id: "d36334004591ffc4e19457a479ddbe9f84e5dccdb9e025feb66ace04a706772c"
	I0127 13:24:31.755472  529251 cri.go:89] found id: "b572bf444df0e92be60ef5d740974218987e4e200cd78f9a83d76e17524eec9d"
	I0127 13:24:31.755477  529251 cri.go:89] found id: "489f669ef39af65715e6a2e8c5caa51fcc700fc6be28fcd1101a02d74922a07d"
	I0127 13:24:31.755481  529251 cri.go:89] found id: "2b348fe4f4e4415899e480b743617cef32a049c912b282dcdfe213f797ddfc22"
	I0127 13:24:31.755486  529251 cri.go:89] found id: "82346ca186db3c10c885cbb3dc86f49e45d6e235afa7c23fee5603383f42aef6"
	I0127 13:24:31.755490  529251 cri.go:89] found id: "08f4e1af5fe0ff58485c1ded2944f3e95440cffd95af3d2d2d9d402172ffe303"
	I0127 13:24:31.755494  529251 cri.go:89] found id: "48ebd0dfb56e4b786f47861bbd718ea730f3212e22730942062347c578b77328"
	I0127 13:24:31.755497  529251 cri.go:89] found id: ""
	I0127 13:24:31.755569  529251 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 13:24:31.773610  529251 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T13:24:31Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 13:24:31.773721  529251 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:24:31.786261  529251 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:24:31.786294  529251 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:24:31.786349  529251 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:24:31.798696  529251 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:24:31.799475  529251 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-766944" does not appear in /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:24:31.799833  529251 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-466901/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-766944" cluster setting kubeconfig missing "embed-certs-766944" context setting]
	I0127 13:24:31.800511  529251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:24:31.802011  529251 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:24:31.813230  529251 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.24
	I0127 13:24:31.813280  529251 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:24:31.813300  529251 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 13:24:31.813367  529251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:24:31.854265  529251 cri.go:89] found id: "cbec54c50f03c72f0f70d780ed3905ce74f237ac03186583c058438d47471288"
	I0127 13:24:31.854293  529251 cri.go:89] found id: "d36334004591ffc4e19457a479ddbe9f84e5dccdb9e025feb66ace04a706772c"
	I0127 13:24:31.854297  529251 cri.go:89] found id: "b572bf444df0e92be60ef5d740974218987e4e200cd78f9a83d76e17524eec9d"
	I0127 13:24:31.854300  529251 cri.go:89] found id: "489f669ef39af65715e6a2e8c5caa51fcc700fc6be28fcd1101a02d74922a07d"
	I0127 13:24:31.854303  529251 cri.go:89] found id: "2b348fe4f4e4415899e480b743617cef32a049c912b282dcdfe213f797ddfc22"
	I0127 13:24:31.854307  529251 cri.go:89] found id: "82346ca186db3c10c885cbb3dc86f49e45d6e235afa7c23fee5603383f42aef6"
	I0127 13:24:31.854309  529251 cri.go:89] found id: "08f4e1af5fe0ff58485c1ded2944f3e95440cffd95af3d2d2d9d402172ffe303"
	I0127 13:24:31.854312  529251 cri.go:89] found id: "48ebd0dfb56e4b786f47861bbd718ea730f3212e22730942062347c578b77328"
	I0127 13:24:31.854314  529251 cri.go:89] found id: ""
	I0127 13:24:31.854319  529251 cri.go:252] Stopping containers: [cbec54c50f03c72f0f70d780ed3905ce74f237ac03186583c058438d47471288 d36334004591ffc4e19457a479ddbe9f84e5dccdb9e025feb66ace04a706772c b572bf444df0e92be60ef5d740974218987e4e200cd78f9a83d76e17524eec9d 489f669ef39af65715e6a2e8c5caa51fcc700fc6be28fcd1101a02d74922a07d 2b348fe4f4e4415899e480b743617cef32a049c912b282dcdfe213f797ddfc22 82346ca186db3c10c885cbb3dc86f49e45d6e235afa7c23fee5603383f42aef6 08f4e1af5fe0ff58485c1ded2944f3e95440cffd95af3d2d2d9d402172ffe303 48ebd0dfb56e4b786f47861bbd718ea730f3212e22730942062347c578b77328]
	I0127 13:24:31.854387  529251 ssh_runner.go:195] Run: which crictl
	I0127 13:24:31.858798  529251 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 cbec54c50f03c72f0f70d780ed3905ce74f237ac03186583c058438d47471288 d36334004591ffc4e19457a479ddbe9f84e5dccdb9e025feb66ace04a706772c b572bf444df0e92be60ef5d740974218987e4e200cd78f9a83d76e17524eec9d 489f669ef39af65715e6a2e8c5caa51fcc700fc6be28fcd1101a02d74922a07d 2b348fe4f4e4415899e480b743617cef32a049c912b282dcdfe213f797ddfc22 82346ca186db3c10c885cbb3dc86f49e45d6e235afa7c23fee5603383f42aef6 08f4e1af5fe0ff58485c1ded2944f3e95440cffd95af3d2d2d9d402172ffe303 48ebd0dfb56e4b786f47861bbd718ea730f3212e22730942062347c578b77328
	I0127 13:24:31.898871  529251 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:24:31.917594  529251 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:24:31.929219  529251 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:24:31.929257  529251 kubeadm.go:157] found existing configuration files:
	
	I0127 13:24:31.929335  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:24:31.939805  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:24:31.939896  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:24:31.950618  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:24:31.961004  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:24:31.961085  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:24:31.972052  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:24:31.982078  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:24:31.982179  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:24:31.992541  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:24:32.002522  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:24:32.002597  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:24:32.012764  529251 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:24:32.023018  529251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:24:32.155121  529251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:24:33.156774  529251 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.001602831s)
	I0127 13:24:33.156841  529251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:24:33.380188  529251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:24:33.455051  529251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:24:33.548771  529251 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:24:33.548884  529251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:24:34.049334  529251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:24:34.549369  529251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:24:35.049073  529251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:24:35.067996  529251 api_server.go:72] duration metric: took 1.519225919s to wait for apiserver process to appear ...
	I0127 13:24:35.068032  529251 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:24:35.068059  529251 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0127 13:24:35.068561  529251 api_server.go:269] stopped: https://192.168.39.24:8443/healthz: Get "https://192.168.39.24:8443/healthz": dial tcp 192.168.39.24:8443: connect: connection refused
	I0127 13:24:35.568224  529251 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0127 13:24:37.683973  529251 api_server.go:279] https://192.168.39.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:24:37.684016  529251 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:24:37.684043  529251 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0127 13:24:37.744017  529251 api_server.go:279] https://192.168.39.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:24:37.744054  529251 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:24:38.068441  529251 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0127 13:24:38.074711  529251 api_server.go:279] https://192.168.39.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:24:38.074743  529251 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:24:38.568377  529251 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0127 13:24:38.573670  529251 api_server.go:279] https://192.168.39.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:24:38.573715  529251 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:24:39.068368  529251 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0127 13:24:39.073947  529251 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0127 13:24:39.082439  529251 api_server.go:141] control plane version: v1.32.1
	I0127 13:24:39.082529  529251 api_server.go:131] duration metric: took 4.014487035s to wait for apiserver health ...
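	[editor's note] The healthz wait above polls https://192.168.39.24:8443/healthz, treating connection refused, 403 (anonymous user) and 500 (post-start hooks still pending) as "not ready yet" until a 200 "ok" arrives. A minimal Go sketch of such a poll loop (illustrative only; TLS verification is skipped purely to keep the sketch short):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 responses (as seen above) mean "keep waiting".
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.24:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```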
	I0127 13:24:39.082552  529251 cni.go:84] Creating CNI manager for ""
	I0127 13:24:39.082561  529251 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:24:39.084038  529251 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:24:39.085282  529251 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:24:39.118729  529251 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
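	[editor's note] The scp above drops a bridge CNI configuration at /etc/cni/net.d/1-k8s.conflist. The exact file is not shown in the log; the Go snippet below merely emits an illustrative minimal bridge conflist (field values are assumptions based on the standard bridge and host-local plugins, not minikube's actual file):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Prints a minimal bridge CNI conflist in the spirit of 1-k8s.conflist.
func main() {
	conf := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}
```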
	I0127 13:24:39.144937  529251 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:24:39.161887  529251 system_pods.go:59] 8 kube-system pods found
	I0127 13:24:39.161954  529251 system_pods.go:61] "coredns-668d6bf9bc-fhbhd" [382f580a-4a26-41cc-9bb5-04abed4ef9da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:24:39.161968  529251 system_pods.go:61] "etcd-embed-certs-766944" [5cf96470-1803-4915-8e70-e1dd2b6b54ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:24:39.161983  529251 system_pods.go:61] "kube-apiserver-embed-certs-766944" [bb988b74-5cab-4974-9b0c-67bd211516ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:24:39.161998  529251 system_pods.go:61] "kube-controller-manager-embed-certs-766944" [742a0276-c078-4a3a-b301-a300bdeb19f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:24:39.162011  529251 system_pods.go:61] "kube-proxy-2nc6c" [2621c663-1079-41ba-84cc-07e46d3c7c94] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 13:24:39.162023  529251 system_pods.go:61] "kube-scheduler-embed-certs-766944" [91c13a7f-e225-41b6-b7ec-dee704fb7c94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:24:39.162033  529251 system_pods.go:61] "metrics-server-f79f97bbb-dr7mb" [110ed5ec-a9dd-4d16-9233-b4a7eba69561] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:24:39.162045  529251 system_pods.go:61] "storage-provisioner" [c2c08c35-e4ec-4c73-b0c0-f8a32e43f125] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 13:24:39.162054  529251 system_pods.go:74] duration metric: took 17.086709ms to wait for pod list to return data ...
	I0127 13:24:39.162069  529251 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:24:39.166586  529251 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:24:39.166621  529251 node_conditions.go:123] node cpu capacity is 2
	I0127 13:24:39.166637  529251 node_conditions.go:105] duration metric: took 4.562709ms to run NodePressure ...
	I0127 13:24:39.166669  529251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:24:39.489303  529251 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 13:24:39.494236  529251 kubeadm.go:739] kubelet initialised
	I0127 13:24:39.494261  529251 kubeadm.go:740] duration metric: took 4.866616ms waiting for restarted kubelet to initialise ...
	I0127 13:24:39.494272  529251 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:24:39.500902  529251 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-fhbhd" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:41.511877  529251 pod_ready.go:103] pod "coredns-668d6bf9bc-fhbhd" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:44.009183  529251 pod_ready.go:103] pod "coredns-668d6bf9bc-fhbhd" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:46.507144  529251 pod_ready.go:103] pod "coredns-668d6bf9bc-fhbhd" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:48.507898  529251 pod_ready.go:103] pod "coredns-668d6bf9bc-fhbhd" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:50.520746  529251 pod_ready.go:93] pod "coredns-668d6bf9bc-fhbhd" in "kube-system" namespace has status "Ready":"True"
	I0127 13:24:50.520787  529251 pod_ready.go:82] duration metric: took 11.019846179s for pod "coredns-668d6bf9bc-fhbhd" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:50.520801  529251 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:50.528014  529251 pod_ready.go:93] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:24:50.528049  529251 pod_ready.go:82] duration metric: took 7.239184ms for pod "etcd-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:50.528061  529251 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:50.536172  529251 pod_ready.go:93] pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:24:50.536201  529251 pod_ready.go:82] duration metric: took 8.133506ms for pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:50.536212  529251 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:50.543333  529251 pod_ready.go:93] pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:24:50.543362  529251 pod_ready.go:82] duration metric: took 7.142309ms for pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:50.543378  529251 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2nc6c" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:50.552262  529251 pod_ready.go:93] pod "kube-proxy-2nc6c" in "kube-system" namespace has status "Ready":"True"
	I0127 13:24:50.552298  529251 pod_ready.go:82] duration metric: took 8.910744ms for pod "kube-proxy-2nc6c" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:50.552313  529251 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:52.560855  529251 pod_ready.go:103] pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:53.561890  529251 pod_ready.go:93] pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:24:53.561920  529251 pod_ready.go:82] duration metric: took 3.00959913s for pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:53.561931  529251 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace to be "Ready" ...
	I0127 13:24:55.572578  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:58.074641  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:00.570381  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:03.078748  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:05.569059  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:07.570727  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:10.068359  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:12.070807  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:14.072055  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:16.570805  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:19.069825  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:21.568947  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:23.569000  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:26.069139  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:28.069649  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:30.070286  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:32.569439  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:35.067995  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:37.070229  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:39.070760  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:41.568000  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:43.569089  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:46.068942  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:48.568061  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:50.569009  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:52.569162  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:54.569507  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:57.069231  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:59.569768  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:02.071370  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:04.569349  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:07.068749  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:09.069258  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:11.069825  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:13.569207  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:15.569500  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:18.069702  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:20.568469  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:22.570020  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:25.069004  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:27.568942  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:29.569752  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:32.068994  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:34.069950  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:36.569173  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:38.569384  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:40.569529  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:42.570397  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:45.068600  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:47.070787  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:49.568893  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:51.569746  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:54.068896  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:56.069334  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:58.570367  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:01.069755  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:03.569751  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:05.572347  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:08.070491  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:10.569456  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:13.071425  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:15.569670  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:18.068202  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:20.069346  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:22.570622  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:25.069733  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:27.569825  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:30.071032  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:32.570691  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:35.070491  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:37.568868  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:40.068250  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:42.069249  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:44.070063  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:46.568824  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:48.568941  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:50.569496  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:53.069250  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:55.569286  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:57.570520  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:00.070194  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:02.570228  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:05.072284  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:07.568155  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:09.568803  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:11.570065  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:13.574091  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:16.069767  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:18.569948  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:21.070930  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:23.569634  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:25.569972  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:28.072238  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:30.570018  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:33.072832  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:35.073781  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:37.570224  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:39.571200  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:41.571317  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:44.069245  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:46.070769  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:48.070894  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:50.568964  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:52.569160  529251 pod_ready.go:103] pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:53.562674  529251 pod_ready.go:82] duration metric: took 4m0.000725017s for pod "metrics-server-f79f97bbb-dr7mb" in "kube-system" namespace to be "Ready" ...
	E0127 13:28:53.562728  529251 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 13:28:53.562748  529251 pod_ready.go:39] duration metric: took 4m14.068466393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:28:53.562778  529251 kubeadm.go:597] duration metric: took 4m21.776476788s to restartPrimaryControlPlane
	W0127 13:28:53.562860  529251 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
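	[editor's note] The long run of pod_ready.go lines above is a readiness poll over the system-critical pods: every pod except metrics-server-f79f97bbb-dr7mb reaches Ready, the 4m0s extra-wait budget expires, and minikube falls back to a full `kubeadm reset` plus re-init, which is what drives this test toward its eventual timeout. A minimal client-go sketch of such a per-pod readiness wait (hypothetical, not minikube's pod_ready.go):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the
// timeout expires, roughly what the pod_ready.go lines above report.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-f79f97bbb-dr7mb", 4*time.Minute))
}
```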
	I0127 13:28:53.562895  529251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 13:28:55.406991  529251 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.84407049s)
	I0127 13:28:55.407062  529251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:28:55.426120  529251 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:28:55.438195  529251 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:28:55.457399  529251 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:28:55.457425  529251 kubeadm.go:157] found existing configuration files:
	
	I0127 13:28:55.457485  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:28:55.469544  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:28:55.469611  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:28:55.481065  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:28:55.492868  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:28:55.492928  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:28:55.505930  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:28:55.517268  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:28:55.517332  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:28:55.528681  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:28:55.539678  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:28:55.539755  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:28:55.550987  529251 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:28:55.719870  529251 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:29:04.273698  529251 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:29:04.273779  529251 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:29:04.273879  529251 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:29:04.274011  529251 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:29:04.274137  529251 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:29:04.274229  529251 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:29:04.275837  529251 out.go:235]   - Generating certificates and keys ...
	I0127 13:29:04.275953  529251 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:29:04.276042  529251 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:29:04.276162  529251 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:29:04.276253  529251 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:29:04.276359  529251 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:29:04.276440  529251 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:29:04.276535  529251 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:29:04.276675  529251 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:29:04.276764  529251 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:29:04.276906  529251 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:29:04.276967  529251 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:29:04.277065  529251 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:29:04.277113  529251 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:29:04.277186  529251 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:29:04.277274  529251 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:29:04.277381  529251 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:29:04.277460  529251 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:29:04.277559  529251 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:29:04.277647  529251 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:29:04.280280  529251 out.go:235]   - Booting up control plane ...
	I0127 13:29:04.280412  529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:29:04.280494  529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:29:04.280588  529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:29:04.280708  529251 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:29:04.280854  529251 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:29:04.280919  529251 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:29:04.281101  529251 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:29:04.281252  529251 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:29:04.281343  529251 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002900104s
	I0127 13:29:04.281472  529251 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:29:04.281557  529251 kubeadm.go:310] [api-check] The API server is healthy after 5.001737119s
	I0127 13:29:04.281687  529251 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:29:04.281880  529251 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:29:04.281947  529251 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:29:04.282181  529251 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-766944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:29:04.282286  529251 kubeadm.go:310] [bootstrap-token] Using token: cubj1b.pwpdo0hgbjp08kat
	I0127 13:29:04.283697  529251 out.go:235]   - Configuring RBAC rules ...
	I0127 13:29:04.283851  529251 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:29:04.283970  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:29:04.284120  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:29:04.284293  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:29:04.284399  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:29:04.284473  529251 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:29:04.284576  529251 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:29:04.284615  529251 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:29:04.284679  529251 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:29:04.284689  529251 kubeadm.go:310] 
	I0127 13:29:04.284780  529251 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:29:04.284794  529251 kubeadm.go:310] 
	I0127 13:29:04.284891  529251 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:29:04.284900  529251 kubeadm.go:310] 
	I0127 13:29:04.284950  529251 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:29:04.285047  529251 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:29:04.285134  529251 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:29:04.285146  529251 kubeadm.go:310] 
	I0127 13:29:04.285267  529251 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:29:04.285279  529251 kubeadm.go:310] 
	I0127 13:29:04.285341  529251 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:29:04.285356  529251 kubeadm.go:310] 
	I0127 13:29:04.285410  529251 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:29:04.285478  529251 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:29:04.285536  529251 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:29:04.285542  529251 kubeadm.go:310] 
	I0127 13:29:04.285636  529251 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:29:04.285723  529251 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:29:04.285731  529251 kubeadm.go:310] 
	I0127 13:29:04.285803  529251 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cubj1b.pwpdo0hgbjp08kat \
	I0127 13:29:04.285958  529251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 \
	I0127 13:29:04.285997  529251 kubeadm.go:310] 	--control-plane 
	I0127 13:29:04.286004  529251 kubeadm.go:310] 
	I0127 13:29:04.286115  529251 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:29:04.286121  529251 kubeadm.go:310] 
	I0127 13:29:04.286247  529251 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cubj1b.pwpdo0hgbjp08kat \
	I0127 13:29:04.286407  529251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 
	I0127 13:29:04.286424  529251 cni.go:84] Creating CNI manager for ""
	I0127 13:29:04.286436  529251 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:04.288049  529251 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:29:04.289218  529251 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:29:04.306228  529251 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:29:04.327835  529251 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:29:04.328008  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:04.328068  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-766944 minikube.k8s.io/updated_at=2025_01_27T13_29_04_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=embed-certs-766944 minikube.k8s.io/primary=true
	I0127 13:29:04.340778  529251 ops.go:34] apiserver oom_adj: -16
	I0127 13:29:04.617241  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:05.117682  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:05.618141  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:06.117679  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:06.618036  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:07.118302  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:07.618303  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:08.117464  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:08.221604  529251 kubeadm.go:1113] duration metric: took 3.893670046s to wait for elevateKubeSystemPrivileges
	I0127 13:29:08.221659  529251 kubeadm.go:394] duration metric: took 4m36.506709461s to StartCluster
	I0127 13:29:08.221687  529251 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:08.221784  529251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:08.223152  529251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:08.223468  529251 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:29:08.223561  529251 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:29:08.223686  529251 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-766944"
	I0127 13:29:08.223707  529251 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-766944"
	W0127 13:29:08.223715  529251 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:29:08.223720  529251 addons.go:69] Setting default-storageclass=true in profile "embed-certs-766944"
	I0127 13:29:08.223775  529251 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-766944"
	I0127 13:29:08.223766  529251 addons.go:69] Setting dashboard=true in profile "embed-certs-766944"
	I0127 13:29:08.223766  529251 addons.go:69] Setting metrics-server=true in profile "embed-certs-766944"
	I0127 13:29:08.223788  529251 config.go:182] Loaded profile config "embed-certs-766944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:08.223797  529251 addons.go:238] Setting addon dashboard=true in "embed-certs-766944"
	I0127 13:29:08.223800  529251 addons.go:238] Setting addon metrics-server=true in "embed-certs-766944"
	W0127 13:29:08.223808  529251 addons.go:247] addon metrics-server should already be in state true
	W0127 13:29:08.223808  529251 addons.go:247] addon dashboard should already be in state true
	I0127 13:29:08.223748  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.223840  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.223862  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.224260  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224276  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224288  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224294  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.224311  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.224322  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.224276  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224390  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.225260  529251 out.go:177] * Verifying Kubernetes components...
	I0127 13:29:08.226552  529251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:08.244300  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
	I0127 13:29:08.244514  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
	I0127 13:29:08.244516  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0127 13:29:08.245012  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.245254  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.245333  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.245603  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.245621  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.245769  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.245780  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.245787  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.245804  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.246187  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.246236  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.246240  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.246450  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.246858  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.246858  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.246898  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.246908  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.246957  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
	I0127 13:29:08.247392  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.248029  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.248055  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.248479  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.249163  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.249212  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.251401  529251 addons.go:238] Setting addon default-storageclass=true in "embed-certs-766944"
	W0127 13:29:08.251426  529251 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:29:08.251459  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.251834  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.251888  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.268388  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I0127 13:29:08.268957  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.269472  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.269488  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.269556  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0127 13:29:08.269902  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.270014  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.270112  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.270466  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.270483  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.270877  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.271178  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.272419  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.273919  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.274603  529251 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:29:08.275601  529251 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:29:08.276632  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:29:08.276650  529251 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:29:08.276675  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.277578  529251 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:08.277591  529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:29:08.277605  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.278681  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36115
	I0127 13:29:08.279322  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.280065  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.280083  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.280587  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.280859  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.282532  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.282997  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.283505  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.283533  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.283908  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.284083  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.284241  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.284285  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.284416  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.284808  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.284841  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.284853  529251 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:29:08.287154  529251 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:29:08.287385  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.287589  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.287760  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.287917  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.288316  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:29:08.288338  529251 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:29:08.288353  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.292370  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.292819  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.292844  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.293148  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.293268  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41237
	I0127 13:29:08.293441  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.293632  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.293671  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.293763  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.294180  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.294204  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.294614  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.295134  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.295170  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.312630  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
	I0127 13:29:08.313201  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.314043  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.314071  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.315352  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.315586  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.317764  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.318043  529251 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:08.318064  529251 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:29:08.318087  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.321585  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.322028  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.322057  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.322200  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.322476  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.322607  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.322797  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.543349  529251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:08.566526  529251 node_ready.go:35] waiting up to 6m0s for node "embed-certs-766944" to be "Ready" ...
	I0127 13:29:08.581029  529251 node_ready.go:49] node "embed-certs-766944" has status "Ready":"True"
	I0127 13:29:08.581058  529251 node_ready.go:38] duration metric: took 14.437055ms for node "embed-certs-766944" to be "Ready" ...
	I0127 13:29:08.581072  529251 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:08.591111  529251 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:08.663492  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:29:08.663529  529251 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:29:08.708763  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:08.731924  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:08.733763  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:29:08.733792  529251 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:29:08.816600  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:29:08.816646  529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:29:08.862311  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:29:08.862346  529251 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:29:08.881791  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:29:08.881830  529251 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:29:08.965427  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:29:08.965468  529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:29:09.025682  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:29:09.025718  529251 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:29:09.026871  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:09.026896  529251 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:29:09.106376  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:29:09.106408  529251 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:29:09.173153  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:09.316157  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:29:09.316202  529251 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:29:09.518415  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:29:09.518455  529251 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:29:09.836886  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:09.836931  529251 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:29:09.974913  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:10.529287  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.820478856s)
	I0127 13:29:10.529346  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.797380034s)
	I0127 13:29:10.529398  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.529415  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.529355  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.529488  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.529871  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.529910  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:10.529932  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.529943  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.529951  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.529878  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.530045  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.530070  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.530088  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.530265  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:10.530268  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.530299  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.530463  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.530482  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.599533  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.599626  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.599978  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:10.600095  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.600128  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.613397  529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:11.025503  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.852294623s)
	I0127 13:29:11.025583  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.025598  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.025974  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:11.026056  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.026072  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.026081  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.026094  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.026369  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:11.026430  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.026446  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.026465  529251 addons.go:479] Verifying addon metrics-server=true in "embed-certs-766944"
	I0127 13:29:11.846156  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.871176785s)
	I0127 13:29:11.846235  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.846258  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.846647  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.846693  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:11.846706  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.846720  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.846730  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.847020  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.847069  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.849004  529251 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-766944 addons enable metrics-server
	
	I0127 13:29:11.850858  529251 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:29:11.852345  529251 addons.go:514] duration metric: took 3.628795827s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:29:13.097655  529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:15.100860  529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:16.102026  529251 pod_ready.go:93] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.102064  529251 pod_ready.go:82] duration metric: took 7.510920671s for pod "etcd-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.102080  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.108782  529251 pod_ready.go:93] pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.108818  529251 pod_ready.go:82] duration metric: took 6.727536ms for pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.108832  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.117964  529251 pod_ready.go:93] pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.117994  529251 pod_ready.go:82] duration metric: took 9.151947ms for pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.118008  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vp88s" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.125633  529251 pod_ready.go:93] pod "kube-proxy-vp88s" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.125657  529251 pod_ready.go:82] duration metric: took 7.641622ms for pod "kube-proxy-vp88s" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.125667  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.141368  529251 pod_ready.go:93] pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.141395  529251 pod_ready.go:82] duration metric: took 15.721182ms for pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.141403  529251 pod_ready.go:39] duration metric: took 7.560318089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:16.141421  529251 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:16.141484  529251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:16.168318  529251 api_server.go:72] duration metric: took 7.944806249s to wait for apiserver process to appear ...
	I0127 13:29:16.168353  529251 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:16.168382  529251 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0127 13:29:16.178242  529251 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0127 13:29:16.179663  529251 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:16.179696  529251 api_server.go:131] duration metric: took 11.33324ms to wait for apiserver health ...
	I0127 13:29:16.179706  529251 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:16.299895  529251 system_pods.go:59] 9 kube-system pods found
	I0127 13:29:16.299927  529251 system_pods.go:61] "coredns-668d6bf9bc-9h4k2" [0eb84d56-e399-4808-afda-b0e1ec4f201f] Running
	I0127 13:29:16.299933  529251 system_pods.go:61] "coredns-668d6bf9bc-wf444" [7afc402e-ab81-4eb5-b2cf-08be738f171d] Running
	I0127 13:29:16.299937  529251 system_pods.go:61] "etcd-embed-certs-766944" [22be64ef-9ba9-4750-aca9-f34b01b46f16] Running
	I0127 13:29:16.299941  529251 system_pods.go:61] "kube-apiserver-embed-certs-766944" [397082cc-acad-493c-8ddd-9f49def9100a] Running
	I0127 13:29:16.299945  529251 system_pods.go:61] "kube-controller-manager-embed-certs-766944" [fe84cf8b-7074-485b-a16e-d75b52b9fe15] Running
	I0127 13:29:16.299948  529251 system_pods.go:61] "kube-proxy-vp88s" [18e5bf87-73fb-43c4-a73e-b2f21a1bb7b8] Running
	I0127 13:29:16.299951  529251 system_pods.go:61] "kube-scheduler-embed-certs-766944" [96587dc6-6fbd-4d22-acfa-09a89f1e711a] Running
	I0127 13:29:16.299956  529251 system_pods.go:61] "metrics-server-f79f97bbb-27dz9" [9f604bd3-a953-4a12-b1bc-48e4e4c8bb4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:16.299962  529251 system_pods.go:61] "storage-provisioner" [7d91f3a9-4b10-40fa-84bc-9d881d955319] Running
	I0127 13:29:16.299973  529251 system_pods.go:74] duration metric: took 120.259661ms to wait for pod list to return data ...
	I0127 13:29:16.299984  529251 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:29:16.496603  529251 default_sa.go:45] found service account: "default"
	I0127 13:29:16.496645  529251 default_sa.go:55] duration metric: took 196.6512ms for default service account to be created ...
	I0127 13:29:16.496658  529251 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:29:16.702376  529251 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
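The kubeadm join commands near the end of the stderr log above carry a --discovery-token-ca-cert-hash value. For background, kubeadm derives that hash from the cluster CA certificate: it is the SHA-256 of the certificate's DER-encoded Subject Public Key Info, printed with a "sha256:" prefix. The following self-contained Go sketch (illustrative only, not minikube or kubeadm source; the local ca.crt path is an assumption) recomputes such a hash from a CA certificate like the one the log shows being reused from /var/lib/minikube/certs:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local path; inside the VM the log shows the certificate
	// directory /var/lib/minikube/certs.
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM certificate found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is the SHA-256 of the CA certificate's
	// DER-encoded Subject Public Key Info, printed as "sha256:<hex>".
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}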
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-766944 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
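The "signal: killed" at the end of the failure message above means the start command was terminated with SIGKILL rather than exiting on its own; that is the string Go's os/exec reports for a process killed by SIGKILL, for example when a command launched with exec.CommandContext outlives its context deadline. A minimal, hypothetical sketch (not the actual test harness; the 30-minute timeout is an assumed stand-in) that produces the same error string for the same minikube invocation:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Assumed stand-in for the harness's real per-test timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Minute)
	defer cancel()

	// Same invocation as recorded in the failure message above.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"start", "-p", "embed-certs-766944",
		"--memory=2200", "--alsologtostderr", "--wait=true",
		"--embed-certs", "--driver=kvm2",
		"--container-runtime=containerd", "--kubernetes-version=v1.32.1")

	if err := cmd.Run(); err != nil {
		// If the deadline expires first, CommandContext sends SIGKILL and
		// Run returns an *exec.ExitError that prints as "signal: killed".
		fmt.Println(err)
	}
}

Shortening the timeout to a few seconds shows the same "signal: killed" output quickly without waiting for a full run.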
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-766944 -n embed-certs-766944
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-766944 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-766944 logs -n 25: (1.421372814s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC | 27 Jan 25 13:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-325431                  | no-preload-325431            | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC | 27 Jan 25 13:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-325431                                   | no-preload-325431            | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-766944                 | embed-certs-766944           | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-766944                                  | embed-certs-766944           | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-325510       | default-k8s-diff-port-325510 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-325510 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC |                     |
	|         | default-k8s-diff-port-325510                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-116657             | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-116657 image                           | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	| delete  | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	| start   | -p newest-cni-296225 --memory=2200 --alsologtostderr   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-296225             | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-296225                  | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-296225 --memory=2200 --alsologtostderr   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-296225 image list                           | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	| delete  | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	| delete  | -p no-preload-325431                                   | no-preload-325431            | jenkins | v1.35.0 | 27 Jan 25 13:50 UTC | 27 Jan 25 13:50 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:28:56
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:28:56.167206  531586 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:28:56.167420  531586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:28:56.167436  531586 out.go:358] Setting ErrFile to fd 2...
	I0127 13:28:56.167442  531586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:28:56.167737  531586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 13:28:56.168827  531586 out.go:352] Setting JSON to false
	I0127 13:28:56.169977  531586 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":36633,"bootTime":1737947903,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:28:56.170093  531586 start.go:139] virtualization: kvm guest
	I0127 13:28:56.172461  531586 out.go:177] * [newest-cni-296225] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:28:56.174020  531586 notify.go:220] Checking for updates...
	I0127 13:28:56.174033  531586 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:28:56.175512  531586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:28:56.176838  531586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:28:56.178184  531586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	I0127 13:28:56.179518  531586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:28:56.180891  531586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:28:56.182708  531586 config.go:182] Loaded profile config "newest-cni-296225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:28:56.183131  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:56.183194  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:56.200308  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
	I0127 13:28:56.201060  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:56.201765  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:28:56.201797  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:56.202181  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:56.202408  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:28:56.202728  531586 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:28:56.203250  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:56.203319  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:56.220011  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I0127 13:28:56.220435  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:56.220978  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:28:56.221006  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:56.221409  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:56.221606  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:28:56.258580  531586 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:28:56.260066  531586 start.go:297] selected driver: kvm2
	I0127 13:28:56.260097  531586 start.go:901] validating driver "kvm2" against &{Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenA
ddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:28:56.260225  531586 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:28:56.260938  531586 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:28:56.261024  531586 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-466901/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:28:56.277111  531586 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:28:56.277523  531586 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:28:56.277560  531586 cni.go:84] Creating CNI manager for ""
	I0127 13:28:56.277605  531586 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:28:56.277639  531586 start.go:340] cluster config:
	{Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:28:56.277740  531586 iso.go:125] acquiring lock: {Name:mkcc3db98c9d4661e75c49bd9b203d0232dff8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:28:56.280361  531586 out.go:177] * Starting "newest-cni-296225" primary control-plane node in "newest-cni-296225" cluster
	I0127 13:28:56.281606  531586 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:28:56.281678  531586 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 13:28:56.281692  531586 cache.go:56] Caching tarball of preloaded images
	I0127 13:28:56.281783  531586 preload.go:172] Found /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 13:28:56.281796  531586 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 13:28:56.281935  531586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/config.json ...
	I0127 13:28:56.282191  531586 start.go:360] acquireMachinesLock for newest-cni-296225: {Name:mke115b779db52cb0a5f0a05f83d5bad0a35c561 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:28:56.282273  531586 start.go:364] duration metric: took 45.538µs to acquireMachinesLock for "newest-cni-296225"
	I0127 13:28:56.282297  531586 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:28:56.282306  531586 fix.go:54] fixHost starting: 
	I0127 13:28:56.282589  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:56.282621  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:56.298876  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38851
	I0127 13:28:56.299391  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:56.299946  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:28:56.299975  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:56.300339  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:56.300605  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:28:56.300813  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:28:56.302631  531586 fix.go:112] recreateIfNeeded on newest-cni-296225: state=Stopped err=<nil>
	I0127 13:28:56.302659  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	W0127 13:28:56.302822  531586 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:28:56.304762  531586 out.go:177] * Restarting existing kvm2 VM for "newest-cni-296225" ...
	I0127 13:28:53.806392  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:55.806518  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:57.808012  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:55.406991  529251 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.84407049s)
	I0127 13:28:55.407062  529251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:28:55.426120  529251 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:28:55.438195  529251 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:28:55.457399  529251 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:28:55.457425  529251 kubeadm.go:157] found existing configuration files:
	
	I0127 13:28:55.457485  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:28:55.469544  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:28:55.469611  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:28:55.481065  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:28:55.492868  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:28:55.492928  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:28:55.505930  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:28:55.517268  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:28:55.517332  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:28:55.528681  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:28:55.539678  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:28:55.539755  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:28:55.550987  529251 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:28:55.719870  529251 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:28:56.306046  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Start
	I0127 13:28:56.306254  531586 main.go:141] libmachine: (newest-cni-296225) starting domain...
	I0127 13:28:56.306277  531586 main.go:141] libmachine: (newest-cni-296225) ensuring networks are active...
	I0127 13:28:56.307157  531586 main.go:141] libmachine: (newest-cni-296225) Ensuring network default is active
	I0127 13:28:56.307587  531586 main.go:141] libmachine: (newest-cni-296225) Ensuring network mk-newest-cni-296225 is active
	I0127 13:28:56.307960  531586 main.go:141] libmachine: (newest-cni-296225) getting domain XML...
	I0127 13:28:56.308646  531586 main.go:141] libmachine: (newest-cni-296225) creating domain...
	I0127 13:28:57.604425  531586 main.go:141] libmachine: (newest-cni-296225) waiting for IP...
	I0127 13:28:57.605479  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:57.606123  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:57.606254  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:57.606079  531622 retry.go:31] will retry after 235.333873ms: waiting for domain to come up
	I0127 13:28:57.843349  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:57.843843  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:57.843877  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:57.843796  531622 retry.go:31] will retry after 261.244379ms: waiting for domain to come up
	I0127 13:28:58.107236  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:58.107847  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:58.107885  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:58.107815  531622 retry.go:31] will retry after 367.467141ms: waiting for domain to come up
	I0127 13:28:58.477662  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:58.478416  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:58.478454  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:58.478385  531622 retry.go:31] will retry after 466.451127ms: waiting for domain to come up
	I0127 13:28:58.946239  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:58.946809  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:58.946854  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:58.946766  531622 retry.go:31] will retry after 559.614953ms: waiting for domain to come up
	I0127 13:28:59.507817  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:59.508251  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:59.508317  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:59.508231  531622 retry.go:31] will retry after 651.013274ms: waiting for domain to come up
	I0127 13:29:00.161338  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:00.161916  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:00.161944  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:00.161879  531622 retry.go:31] will retry after 780.526485ms: waiting for domain to come up
	I0127 13:29:00.944251  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:00.944845  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:00.944875  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:00.944817  531622 retry.go:31] will retry after 1.304098s: waiting for domain to come up
	I0127 13:28:59.808090  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:01.808480  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:04.273698  529251 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:29:04.273779  529251 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:29:04.273879  529251 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:29:04.274011  529251 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:29:04.274137  529251 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:29:04.274229  529251 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:29:04.275837  529251 out.go:235]   - Generating certificates and keys ...
	I0127 13:29:04.275953  529251 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:29:04.276042  529251 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:29:04.276162  529251 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:29:04.276253  529251 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:29:04.276359  529251 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:29:04.276440  529251 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:29:04.276535  529251 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:29:04.276675  529251 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:29:04.276764  529251 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:29:04.276906  529251 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:29:04.276967  529251 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:29:04.277065  529251 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:29:04.277113  529251 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:29:04.277186  529251 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:29:04.277274  529251 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:29:04.277381  529251 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:29:04.277460  529251 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:29:04.277559  529251 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:29:04.277647  529251 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:29:04.280280  529251 out.go:235]   - Booting up control plane ...
	I0127 13:29:04.280412  529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:29:04.280494  529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:29:04.280588  529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:29:04.280708  529251 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:29:04.280854  529251 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:29:04.280919  529251 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:29:04.281101  529251 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:29:04.281252  529251 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:29:04.281343  529251 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002900104s
	I0127 13:29:04.281472  529251 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:29:04.281557  529251 kubeadm.go:310] [api-check] The API server is healthy after 5.001737119s
	I0127 13:29:04.281687  529251 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:29:04.281880  529251 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:29:04.281947  529251 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:29:04.282181  529251 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-766944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:29:04.282286  529251 kubeadm.go:310] [bootstrap-token] Using token: cubj1b.pwpdo0hgbjp08kat
	I0127 13:29:04.283697  529251 out.go:235]   - Configuring RBAC rules ...
	I0127 13:29:04.283851  529251 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:29:04.283970  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:29:04.284120  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:29:04.284293  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:29:04.284399  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:29:04.284473  529251 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:29:04.284576  529251 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:29:04.284615  529251 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:29:04.284679  529251 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:29:04.284689  529251 kubeadm.go:310] 
	I0127 13:29:04.284780  529251 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:29:04.284794  529251 kubeadm.go:310] 
	I0127 13:29:04.284891  529251 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:29:04.284900  529251 kubeadm.go:310] 
	I0127 13:29:04.284950  529251 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:29:04.285047  529251 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:29:04.285134  529251 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:29:04.285146  529251 kubeadm.go:310] 
	I0127 13:29:04.285267  529251 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:29:04.285279  529251 kubeadm.go:310] 
	I0127 13:29:04.285341  529251 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:29:04.285356  529251 kubeadm.go:310] 
	I0127 13:29:04.285410  529251 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:29:04.285478  529251 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:29:04.285536  529251 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:29:04.285542  529251 kubeadm.go:310] 
	I0127 13:29:04.285636  529251 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:29:04.285723  529251 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:29:04.285731  529251 kubeadm.go:310] 
	I0127 13:29:04.285803  529251 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cubj1b.pwpdo0hgbjp08kat \
	I0127 13:29:04.285958  529251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 \
	I0127 13:29:04.285997  529251 kubeadm.go:310] 	--control-plane 
	I0127 13:29:04.286004  529251 kubeadm.go:310] 
	I0127 13:29:04.286115  529251 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:29:04.286121  529251 kubeadm.go:310] 
	I0127 13:29:04.286247  529251 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cubj1b.pwpdo0hgbjp08kat \
	I0127 13:29:04.286407  529251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 
	I0127 13:29:04.286424  529251 cni.go:84] Creating CNI manager for ""
	I0127 13:29:04.286436  529251 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:04.288049  529251 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:29:02.250183  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:02.250724  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:02.250759  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:02.250691  531622 retry.go:31] will retry after 1.464046224s: waiting for domain to come up
	I0127 13:29:03.716441  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:03.716968  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:03.716995  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:03.716911  531622 retry.go:31] will retry after 1.473613486s: waiting for domain to come up
	I0127 13:29:05.192629  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:05.193220  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:05.193256  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:05.193184  531622 retry.go:31] will retry after 1.906374841s: waiting for domain to come up
	I0127 13:29:04.289218  529251 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:29:04.306228  529251 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:29:04.327835  529251 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:29:04.328008  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:04.328068  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-766944 minikube.k8s.io/updated_at=2025_01_27T13_29_04_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=embed-certs-766944 minikube.k8s.io/primary=true
	I0127 13:29:04.340778  529251 ops.go:34] apiserver oom_adj: -16
	I0127 13:29:04.617241  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:05.117682  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:05.618141  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:06.117679  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:06.618036  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:07.118302  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:07.618303  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:08.117464  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:08.221604  529251 kubeadm.go:1113] duration metric: took 3.893670046s to wait for elevateKubeSystemPrivileges
	I0127 13:29:08.221659  529251 kubeadm.go:394] duration metric: took 4m36.506709461s to StartCluster
	I0127 13:29:08.221687  529251 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:08.221784  529251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:08.223152  529251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:08.223468  529251 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:29:08.223561  529251 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:29:08.223686  529251 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-766944"
	I0127 13:29:08.223707  529251 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-766944"
	W0127 13:29:08.223715  529251 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:29:08.223720  529251 addons.go:69] Setting default-storageclass=true in profile "embed-certs-766944"
	I0127 13:29:08.223775  529251 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-766944"
	I0127 13:29:08.223766  529251 addons.go:69] Setting dashboard=true in profile "embed-certs-766944"
	I0127 13:29:08.223766  529251 addons.go:69] Setting metrics-server=true in profile "embed-certs-766944"
	I0127 13:29:08.223788  529251 config.go:182] Loaded profile config "embed-certs-766944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:08.223797  529251 addons.go:238] Setting addon dashboard=true in "embed-certs-766944"
	I0127 13:29:08.223800  529251 addons.go:238] Setting addon metrics-server=true in "embed-certs-766944"
	W0127 13:29:08.223808  529251 addons.go:247] addon metrics-server should already be in state true
	W0127 13:29:08.223808  529251 addons.go:247] addon dashboard should already be in state true
	I0127 13:29:08.223748  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.223840  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.223862  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.224260  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224276  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224288  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224294  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.224311  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.224322  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.224276  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224390  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.225260  529251 out.go:177] * Verifying Kubernetes components...
	I0127 13:29:08.226552  529251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:08.244300  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
	I0127 13:29:08.244514  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
	I0127 13:29:08.244516  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0127 13:29:08.245012  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.245254  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.245333  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.245603  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.245621  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.245769  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.245780  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.245787  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.245804  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.246187  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.246236  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.246240  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.246450  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.246858  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.246858  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.246898  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.246908  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.246957  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
	I0127 13:29:08.247392  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.248029  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.248055  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.248479  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.249163  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.249212  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.251401  529251 addons.go:238] Setting addon default-storageclass=true in "embed-certs-766944"
	W0127 13:29:08.251426  529251 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:29:08.251459  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.251834  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.251888  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.268388  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I0127 13:29:08.268957  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.269472  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.269488  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.269556  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0127 13:29:08.269902  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.270014  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.270112  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.270466  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.270483  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.270877  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.271178  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.272419  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.273919  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.274603  529251 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:29:08.275601  529251 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:29:08.276632  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:29:08.276650  529251 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:29:08.276675  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.277578  529251 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:08.277591  529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:29:08.277605  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.278681  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36115
	I0127 13:29:08.279322  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.280065  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.280083  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.280587  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.280859  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.282532  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.282997  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.283505  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.283533  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.283908  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.284083  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.284241  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.284285  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.284416  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.284808  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.284841  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.284853  529251 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:29:03.808549  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:05.809379  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:08.287154  529251 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:29:08.287385  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.287589  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.287760  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.287917  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.288316  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:29:08.288338  529251 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:29:08.288353  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.292370  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.292819  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.292844  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.293148  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.293268  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41237
	I0127 13:29:08.293441  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.293632  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.293671  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.293763  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.294180  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.294204  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.294614  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.295134  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.295170  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.312630  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
	I0127 13:29:08.313201  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.314043  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.314071  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.315352  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.315586  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.317764  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.318043  529251 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:08.318064  529251 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:29:08.318087  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.321585  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.322028  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.322057  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.322200  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.322476  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.322607  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.322797  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.543349  529251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:08.566526  529251 node_ready.go:35] waiting up to 6m0s for node "embed-certs-766944" to be "Ready" ...
	I0127 13:29:08.581029  529251 node_ready.go:49] node "embed-certs-766944" has status "Ready":"True"
	I0127 13:29:08.581058  529251 node_ready.go:38] duration metric: took 14.437055ms for node "embed-certs-766944" to be "Ready" ...
	I0127 13:29:08.581072  529251 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:08.591111  529251 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:08.663492  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:29:08.663529  529251 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:29:08.708763  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:08.731924  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:08.733763  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:29:08.733792  529251 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:29:08.816600  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:29:08.816646  529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:29:08.862311  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:29:08.862346  529251 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:29:08.881791  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:29:08.881830  529251 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:29:08.965427  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:29:08.965468  529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:29:09.025682  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:29:09.025718  529251 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:29:09.026871  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:09.026896  529251 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:29:09.106376  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:29:09.106408  529251 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:29:09.173153  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:07.101069  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:07.101691  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:07.101724  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:07.101645  531622 retry.go:31] will retry after 3.3503886s: waiting for domain to come up
	I0127 13:29:10.454092  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:10.454611  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:10.454643  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:10.454550  531622 retry.go:31] will retry after 2.977667559s: waiting for domain to come up
	I0127 13:29:09.316157  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:29:09.316202  529251 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:29:09.518415  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:29:09.518455  529251 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:29:09.836886  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:09.836931  529251 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:29:09.974913  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:10.529287  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.820478856s)
	I0127 13:29:10.529346  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.797380034s)
	I0127 13:29:10.529398  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.529415  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.529355  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.529488  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.529871  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.529910  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:10.529932  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.529943  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.529951  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.529878  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.530045  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.530070  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.530088  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.530265  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:10.530268  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.530299  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.530463  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.530482  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.599533  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.599626  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.599978  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:10.600095  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.600128  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.613397  529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:11.025503  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.852294623s)
	I0127 13:29:11.025583  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.025598  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.025974  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:11.026056  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.026072  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.026081  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.026094  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.026369  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:11.026430  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.026446  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.026465  529251 addons.go:479] Verifying addon metrics-server=true in "embed-certs-766944"
	I0127 13:29:11.846156  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.871176785s)
	I0127 13:29:11.846235  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.846258  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.846647  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.846693  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:11.846706  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.846720  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.846730  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.847020  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.847069  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.849004  529251 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-766944 addons enable metrics-server
	
	I0127 13:29:11.850858  529251 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:29:08.309241  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:10.806393  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:12.808038  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:11.852345  529251 addons.go:514] duration metric: took 3.628795827s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:29:13.097655  529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:13.433798  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:13.434282  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:13.434324  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:13.434271  531622 retry.go:31] will retry after 5.418420331s: waiting for domain to come up
	I0127 13:29:14.300254  529417 pod_ready.go:82] duration metric: took 4m0.000130065s for pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace to be "Ready" ...
	E0127 13:29:14.300291  529417 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 13:29:14.300324  529417 pod_ready.go:39] duration metric: took 4m12.210910321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:14.300355  529417 kubeadm.go:597] duration metric: took 4m20.336267253s to restartPrimaryControlPlane
	W0127 13:29:14.300420  529417 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:29:14.300449  529417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 13:29:16.335301  529417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.034816955s)
	I0127 13:29:16.335395  529417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:29:16.352998  529417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:29:16.365092  529417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:29:16.378733  529417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:29:16.378758  529417 kubeadm.go:157] found existing configuration files:
	
	I0127 13:29:16.378804  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 13:29:16.395924  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:29:16.396005  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:29:16.408496  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 13:29:16.418917  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:29:16.418986  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:29:16.429065  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 13:29:16.439234  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:29:16.439333  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:29:16.449865  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 13:29:16.460738  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:29:16.460831  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:29:16.472411  529417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:29:16.642625  529417 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:29:15.100860  529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:16.102026  529251 pod_ready.go:93] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.102064  529251 pod_ready.go:82] duration metric: took 7.510920671s for pod "etcd-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.102080  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.108782  529251 pod_ready.go:93] pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.108818  529251 pod_ready.go:82] duration metric: took 6.727536ms for pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.108832  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.117964  529251 pod_ready.go:93] pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.117994  529251 pod_ready.go:82] duration metric: took 9.151947ms for pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.118008  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vp88s" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.125633  529251 pod_ready.go:93] pod "kube-proxy-vp88s" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.125657  529251 pod_ready.go:82] duration metric: took 7.641622ms for pod "kube-proxy-vp88s" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.125667  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.141368  529251 pod_ready.go:93] pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.141395  529251 pod_ready.go:82] duration metric: took 15.721182ms for pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.141403  529251 pod_ready.go:39] duration metric: took 7.560318089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:16.141421  529251 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:16.141484  529251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:16.168318  529251 api_server.go:72] duration metric: took 7.944806249s to wait for apiserver process to appear ...
	I0127 13:29:16.168353  529251 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:16.168382  529251 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0127 13:29:16.178242  529251 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0127 13:29:16.179663  529251 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:16.179696  529251 api_server.go:131] duration metric: took 11.33324ms to wait for apiserver health ...
	I0127 13:29:16.179706  529251 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:16.299895  529251 system_pods.go:59] 9 kube-system pods found
	I0127 13:29:16.299927  529251 system_pods.go:61] "coredns-668d6bf9bc-9h4k2" [0eb84d56-e399-4808-afda-b0e1ec4f201f] Running
	I0127 13:29:16.299933  529251 system_pods.go:61] "coredns-668d6bf9bc-wf444" [7afc402e-ab81-4eb5-b2cf-08be738f171d] Running
	I0127 13:29:16.299937  529251 system_pods.go:61] "etcd-embed-certs-766944" [22be64ef-9ba9-4750-aca9-f34b01b46f16] Running
	I0127 13:29:16.299941  529251 system_pods.go:61] "kube-apiserver-embed-certs-766944" [397082cc-acad-493c-8ddd-9f49def9100a] Running
	I0127 13:29:16.299945  529251 system_pods.go:61] "kube-controller-manager-embed-certs-766944" [fe84cf8b-7074-485b-a16e-d75b52b9fe15] Running
	I0127 13:29:16.299948  529251 system_pods.go:61] "kube-proxy-vp88s" [18e5bf87-73fb-43c4-a73e-b2f21a1bb7b8] Running
	I0127 13:29:16.299951  529251 system_pods.go:61] "kube-scheduler-embed-certs-766944" [96587dc6-6fbd-4d22-acfa-09a89f1e711a] Running
	I0127 13:29:16.299956  529251 system_pods.go:61] "metrics-server-f79f97bbb-27dz9" [9f604bd3-a953-4a12-b1bc-48e4e4c8bb4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:16.299962  529251 system_pods.go:61] "storage-provisioner" [7d91f3a9-4b10-40fa-84bc-9d881d955319] Running
	I0127 13:29:16.299973  529251 system_pods.go:74] duration metric: took 120.259661ms to wait for pod list to return data ...
	I0127 13:29:16.299984  529251 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:29:16.496603  529251 default_sa.go:45] found service account: "default"
	I0127 13:29:16.496645  529251 default_sa.go:55] duration metric: took 196.6512ms for default service account to be created ...
	I0127 13:29:16.496658  529251 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:29:16.702376  529251 system_pods.go:87] 9 kube-system pods found
	I0127 13:29:18.854257  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:18.854914  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has current primary IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:18.854944  531586 main.go:141] libmachine: (newest-cni-296225) found domain IP: 192.168.72.46
	I0127 13:29:18.854956  531586 main.go:141] libmachine: (newest-cni-296225) reserving static IP address...
	I0127 13:29:18.855436  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "newest-cni-296225", mac: "52:54:00:25:60:c9", ip: "192.168.72.46"} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:18.855466  531586 main.go:141] libmachine: (newest-cni-296225) DBG | skip adding static IP to network mk-newest-cni-296225 - found existing host DHCP lease matching {name: "newest-cni-296225", mac: "52:54:00:25:60:c9", ip: "192.168.72.46"}
	I0127 13:29:18.855480  531586 main.go:141] libmachine: (newest-cni-296225) reserved static IP address 192.168.72.46 for domain newest-cni-296225
	I0127 13:29:18.855493  531586 main.go:141] libmachine: (newest-cni-296225) waiting for SSH...
	I0127 13:29:18.855509  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Getting to WaitForSSH function...
	I0127 13:29:18.858091  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:18.858477  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:18.858507  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:18.858705  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Using SSH client type: external
	I0127 13:29:18.858725  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa (-rw-------)
	I0127 13:29:18.858760  531586 main.go:141] libmachine: (newest-cni-296225) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.46 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:29:18.858784  531586 main.go:141] libmachine: (newest-cni-296225) DBG | About to run SSH command:
	I0127 13:29:18.858806  531586 main.go:141] libmachine: (newest-cni-296225) DBG | exit 0
	I0127 13:29:18.996896  531586 main.go:141] libmachine: (newest-cni-296225) DBG | SSH cmd err, output: <nil>: 
	I0127 13:29:18.997263  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetConfigRaw
	I0127 13:29:18.998035  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
	I0127 13:29:19.001537  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.001980  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.002005  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.002524  531586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/config.json ...
	I0127 13:29:19.002778  531586 machine.go:93] provisionDockerMachine start ...
	I0127 13:29:19.002804  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:19.003111  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.006300  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.006759  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.006788  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.006991  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.007221  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.007434  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.007600  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.007802  531586 main.go:141] libmachine: Using SSH client type: native
	I0127 13:29:19.008050  531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.46 22 <nil> <nil>}
	I0127 13:29:19.008068  531586 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:29:19.124549  531586 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:29:19.124589  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetMachineName
	I0127 13:29:19.124921  531586 buildroot.go:166] provisioning hostname "newest-cni-296225"
	I0127 13:29:19.124953  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetMachineName
	I0127 13:29:19.125168  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.128509  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.128870  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.128904  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.129136  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.129338  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.129489  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.129682  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.129915  531586 main.go:141] libmachine: Using SSH client type: native
	I0127 13:29:19.130181  531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.46 22 <nil> <nil>}
	I0127 13:29:19.130202  531586 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-296225 && echo "newest-cni-296225" | sudo tee /etc/hostname
	I0127 13:29:19.274181  531586 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-296225
	
	I0127 13:29:19.274233  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.277975  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.278540  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.278575  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.278963  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.279243  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.279514  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.279686  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.279898  531586 main.go:141] libmachine: Using SSH client type: native
	I0127 13:29:19.280149  531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.46 22 <nil> <nil>}
	I0127 13:29:19.280176  531586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-296225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-296225/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-296225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:29:19.425977  531586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:29:19.426016  531586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-466901/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-466901/.minikube}
	I0127 13:29:19.426066  531586 buildroot.go:174] setting up certificates
	I0127 13:29:19.426080  531586 provision.go:84] configureAuth start
	I0127 13:29:19.426092  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetMachineName
	I0127 13:29:19.426372  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
	I0127 13:29:19.429756  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.430201  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.430230  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.430467  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.432982  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.433352  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.433381  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.433508  531586 provision.go:143] copyHostCerts
	I0127 13:29:19.433596  531586 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem, removing ...
	I0127 13:29:19.433613  531586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem
	I0127 13:29:19.433713  531586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem (1082 bytes)
	I0127 13:29:19.433862  531586 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem, removing ...
	I0127 13:29:19.433898  531586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem
	I0127 13:29:19.433952  531586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem (1123 bytes)
	I0127 13:29:19.434069  531586 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem, removing ...
	I0127 13:29:19.434083  531586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem
	I0127 13:29:19.434121  531586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem (1675 bytes)
	I0127 13:29:19.434225  531586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem org=jenkins.newest-cni-296225 san=[127.0.0.1 192.168.72.46 localhost minikube newest-cni-296225]
	I0127 13:29:19.616134  531586 provision.go:177] copyRemoteCerts
	I0127 13:29:19.616230  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:29:19.616268  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.619632  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.620115  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.620170  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.620627  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.620882  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.621062  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.621267  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:19.716453  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 13:29:19.751558  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 13:29:19.787164  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:29:19.822729  531586 provision.go:87] duration metric: took 396.632166ms to configureAuth
	I0127 13:29:19.822766  531586 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:29:19.823021  531586 config.go:182] Loaded profile config "newest-cni-296225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:19.823035  531586 machine.go:96] duration metric: took 820.241874ms to provisionDockerMachine
	I0127 13:29:19.823044  531586 start.go:293] postStartSetup for "newest-cni-296225" (driver="kvm2")
	I0127 13:29:19.823074  531586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:29:19.823125  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:19.823524  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:29:19.823610  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.826416  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.826837  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.826869  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.827189  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.827424  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.827641  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.827800  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:19.922618  531586 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:29:19.927700  531586 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:29:19.927740  531586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/addons for local assets ...
	I0127 13:29:19.927820  531586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/files for local assets ...
	I0127 13:29:19.927920  531586 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem -> 4742752.pem in /etc/ssl/certs
	I0127 13:29:19.928047  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:29:19.940393  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /etc/ssl/certs/4742752.pem (1708 bytes)
	I0127 13:29:19.970138  531586 start.go:296] duration metric: took 147.059526ms for postStartSetup
	I0127 13:29:19.970186  531586 fix.go:56] duration metric: took 23.687879815s for fixHost
	I0127 13:29:19.970213  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.973696  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.974136  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.974162  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.974433  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.974671  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.974863  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.975000  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.975177  531586 main.go:141] libmachine: Using SSH client type: native
	I0127 13:29:19.975406  531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.46 22 <nil> <nil>}
	I0127 13:29:19.975421  531586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:29:20.097158  531586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984560.051374432
	
	I0127 13:29:20.097195  531586 fix.go:216] guest clock: 1737984560.051374432
	I0127 13:29:20.097205  531586 fix.go:229] Guest: 2025-01-27 13:29:20.051374432 +0000 UTC Remote: 2025-01-27 13:29:19.970191951 +0000 UTC m=+23.842107580 (delta=81.182481ms)
	I0127 13:29:20.097251  531586 fix.go:200] guest clock delta is within tolerance: 81.182481ms
	I0127 13:29:20.097264  531586 start.go:83] releasing machines lock for "newest-cni-296225", held for 23.814976228s
	I0127 13:29:20.097302  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:20.097604  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
	I0127 13:29:20.101191  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.101642  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:20.101693  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.101991  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:20.102587  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:20.102797  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:20.102930  531586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:29:20.102980  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:20.103025  531586 ssh_runner.go:195] Run: cat /version.json
	I0127 13:29:20.103054  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:20.106331  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.106785  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:20.106843  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.106883  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.107100  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:20.107355  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:20.107415  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:20.107456  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.107545  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:20.107711  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:20.107752  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:20.107851  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:20.108004  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:20.108175  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:20.198167  531586 ssh_runner.go:195] Run: systemctl --version
	I0127 13:29:20.220547  531586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:29:20.228913  531586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:29:20.229009  531586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:29:20.252220  531586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 13:29:20.252252  531586 start.go:495] detecting cgroup driver to use...
	I0127 13:29:20.252336  531586 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 13:29:20.290040  531586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 13:29:20.307723  531586 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:29:20.307812  531586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:29:20.323473  531586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:29:20.339833  531586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:29:20.476188  531586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:29:20.632180  531586 docker.go:233] disabling docker service ...
	I0127 13:29:20.632272  531586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:29:20.647480  531586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:29:20.662456  531586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:29:20.849643  531586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:29:21.014719  531586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:29:21.034260  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:29:21.055949  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 13:29:21.068764  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 13:29:21.083524  531586 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 13:29:21.083605  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 13:29:21.098914  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:29:21.113664  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 13:29:21.127826  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:29:21.139382  531586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:29:21.151342  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 13:29:21.162384  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 13:29:21.174714  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 13:29:21.188361  531586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:29:21.201837  531586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:29:21.201921  531586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:29:21.216404  531586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:29:21.226169  531586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:21.347858  531586 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 13:29:21.387449  531586 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 13:29:21.387582  531586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 13:29:21.393515  531586 retry.go:31] will retry after 514.05687ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
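
containerd was just restarted, so the first stat of /run/containerd/containerd.sock fails and retry.go backs off for roughly half a second before trying again, inside the 60s budget announced at start.go:542. A minimal local sketch of that wait-for-socket loop (timings and path chosen to mirror the log; not minikube's implementation, which runs the stat remotely):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for a unix socket path until it exists or the budget runs out.
    func waitForSocket(path string, budget time.Duration) error {
        deadline := time.Now().Add(budget)
        delay := 500 * time.Millisecond // the log retries after ~514ms
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("containerd socket is ready")
    }
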
	I0127 13:29:21.908225  531586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 13:29:21.917708  531586 start.go:563] Will wait 60s for crictl version
	I0127 13:29:21.917786  531586 ssh_runner.go:195] Run: which crictl
	I0127 13:29:21.923989  531586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:29:21.981569  531586 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 13:29:21.981675  531586 ssh_runner.go:195] Run: containerd --version
	I0127 13:29:22.027649  531586 ssh_runner.go:195] Run: containerd --version
	I0127 13:29:22.060339  531586 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 13:29:22.061787  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
	I0127 13:29:22.065481  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:22.065908  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:22.065946  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:22.066183  531586 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 13:29:22.070907  531586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:29:22.089788  531586 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 13:29:25.581414  529417 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:29:25.581498  529417 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:29:25.581603  529417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:29:25.581744  529417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:29:25.581857  529417 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:29:25.581911  529417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:29:25.583668  529417 out.go:235]   - Generating certificates and keys ...
	I0127 13:29:25.583784  529417 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:29:25.583864  529417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:29:25.583999  529417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:29:25.584094  529417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:29:25.584212  529417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:29:25.584290  529417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:29:25.584368  529417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:29:25.584490  529417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:29:25.584607  529417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:29:25.584736  529417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:29:25.584797  529417 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:29:25.584859  529417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:29:25.584911  529417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:29:25.584981  529417 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:29:25.585070  529417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:29:25.585182  529417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:29:25.585291  529417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:29:25.585425  529417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:29:25.585505  529417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:29:25.587922  529417 out.go:235]   - Booting up control plane ...
	I0127 13:29:25.588008  529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:29:25.588109  529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:29:25.588212  529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:29:25.588306  529417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:29:25.588407  529417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:29:25.588476  529417 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:29:25.588653  529417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:29:25.588744  529417 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:29:25.588806  529417 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.424535ms
	I0127 13:29:25.588894  529417 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:29:25.588947  529417 kubeadm.go:310] [api-check] The API server is healthy after 6.003546574s
	I0127 13:29:25.589042  529417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:29:25.589188  529417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:29:25.589243  529417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:29:25.589423  529417 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-325510 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:29:25.589477  529417 kubeadm.go:310] [bootstrap-token] Using token: pmveah.4ebz9u5xjcadsa8l
	I0127 13:29:25.590661  529417 out.go:235]   - Configuring RBAC rules ...
	I0127 13:29:25.590772  529417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:29:25.590884  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:29:25.591076  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:29:25.591309  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:29:25.591477  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:29:25.591601  529417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:29:25.591734  529417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:29:25.591810  529417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:29:25.591869  529417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:29:25.591879  529417 kubeadm.go:310] 
	I0127 13:29:25.591954  529417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:29:25.591974  529417 kubeadm.go:310] 
	I0127 13:29:25.592097  529417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:29:25.592115  529417 kubeadm.go:310] 
	I0127 13:29:25.592151  529417 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:29:25.592237  529417 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:29:25.592327  529417 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:29:25.592337  529417 kubeadm.go:310] 
	I0127 13:29:25.592390  529417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:29:25.592397  529417 kubeadm.go:310] 
	I0127 13:29:25.592435  529417 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:29:25.592439  529417 kubeadm.go:310] 
	I0127 13:29:25.592512  529417 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:29:25.592614  529417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:29:25.592674  529417 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:29:25.592682  529417 kubeadm.go:310] 
	I0127 13:29:25.592801  529417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:29:25.592928  529417 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:29:25.592941  529417 kubeadm.go:310] 
	I0127 13:29:25.593032  529417 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token pmveah.4ebz9u5xjcadsa8l \
	I0127 13:29:25.593158  529417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 \
	I0127 13:29:25.593193  529417 kubeadm.go:310] 	--control-plane 
	I0127 13:29:25.593206  529417 kubeadm.go:310] 
	I0127 13:29:25.593328  529417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:29:25.593347  529417 kubeadm.go:310] 
	I0127 13:29:25.593453  529417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token pmveah.4ebz9u5xjcadsa8l \
	I0127 13:29:25.593643  529417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 
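
The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA certificate's Subject Public Key Info, prefixed with "sha256:". A short Go sketch that recomputes it from a CA certificate PEM (the path is an assumption taken from the certificatesDir used elsewhere in this log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm pins the SHA-256 of the certificate's SubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
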
	I0127 13:29:25.593663  529417 cni.go:84] Creating CNI manager for ""
	I0127 13:29:25.593674  529417 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:25.595331  529417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:29:22.091203  531586 kubeadm.go:883] updating cluster {Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:29:22.091437  531586 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:29:22.091524  531586 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:29:22.133513  531586 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:29:22.133543  531586 containerd.go:534] Images already preloaded, skipping extraction
	I0127 13:29:22.133614  531586 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:29:22.172620  531586 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:29:22.172654  531586 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:29:22.172666  531586 kubeadm.go:934] updating node { 192.168.72.46 8443 v1.32.1 containerd true true} ...
	I0127 13:29:22.172814  531586 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-296225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:29:22.172904  531586 ssh_runner.go:195] Run: sudo crictl info
	I0127 13:29:22.221421  531586 cni.go:84] Creating CNI manager for ""
	I0127 13:29:22.221446  531586 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:22.221457  531586 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 13:29:22.221483  531586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.46 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-296225 NodeName:newest-cni-296225 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.46"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:29:22.221619  531586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-296225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.46"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.46"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:29:22.221696  531586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:29:22.233206  531586 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:29:22.233298  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:29:22.247498  531586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0127 13:29:22.265563  531586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:29:22.283377  531586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0127 13:29:22.304627  531586 ssh_runner.go:195] Run: grep 192.168.72.46	control-plane.minikube.internal$ /etc/hosts
	I0127 13:29:22.310093  531586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.46	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:29:22.328149  531586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:22.474894  531586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:22.498792  531586 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225 for IP: 192.168.72.46
	I0127 13:29:22.498819  531586 certs.go:194] generating shared ca certs ...
	I0127 13:29:22.498848  531586 certs.go:226] acquiring lock for ca certs: {Name:mk60f2aac78eb363c5e06a00675357d94c0df88d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:22.499080  531586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key
	I0127 13:29:22.499144  531586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key
	I0127 13:29:22.499160  531586 certs.go:256] generating profile certs ...
	I0127 13:29:22.499295  531586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/client.key
	I0127 13:29:22.499368  531586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/apiserver.key.1b824597
	I0127 13:29:22.499428  531586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/proxy-client.key
	I0127 13:29:22.499576  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem (1338 bytes)
	W0127 13:29:22.499617  531586 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275_empty.pem, impossibly tiny 0 bytes
	I0127 13:29:22.499632  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:29:22.499663  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem (1082 bytes)
	I0127 13:29:22.499700  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:29:22.499734  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem (1675 bytes)
	I0127 13:29:22.499790  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem (1708 bytes)
	I0127 13:29:22.500650  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:29:22.551481  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:29:22.590593  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:29:22.630918  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:29:22.660478  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 13:29:22.696686  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 13:29:22.724193  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:29:22.752949  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:29:22.784814  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:29:22.812321  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem --> /usr/share/ca-certificates/474275.pem (1338 bytes)
	I0127 13:29:22.842249  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /usr/share/ca-certificates/4742752.pem (1708 bytes)
	I0127 13:29:22.872391  531586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:29:22.898310  531586 ssh_runner.go:195] Run: openssl version
	I0127 13:29:22.905518  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:29:22.917623  531586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:29:22.922904  531586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:10 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:29:22.922982  531586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:29:22.929666  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:29:22.941982  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/474275.pem && ln -fs /usr/share/ca-certificates/474275.pem /etc/ssl/certs/474275.pem"
	I0127 13:29:22.955315  531586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/474275.pem
	I0127 13:29:22.962079  531586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:18 /usr/share/ca-certificates/474275.pem
	I0127 13:29:22.962157  531586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/474275.pem
	I0127 13:29:22.971599  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/474275.pem /etc/ssl/certs/51391683.0"
	I0127 13:29:22.985012  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4742752.pem && ln -fs /usr/share/ca-certificates/4742752.pem /etc/ssl/certs/4742752.pem"
	I0127 13:29:22.998788  531586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4742752.pem
	I0127 13:29:23.005232  531586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:18 /usr/share/ca-certificates/4742752.pem
	I0127 13:29:23.005312  531586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4742752.pem
	I0127 13:29:23.013471  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4742752.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:29:23.028126  531586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:29:23.033971  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:29:23.041089  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:29:23.048533  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:29:23.056641  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:29:23.065453  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:29:23.074452  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
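
Each "openssl x509 -noout ... -checkend 86400" call above asks whether the certificate will still be valid 24 hours from now; a failing check would trigger regeneration instead of reuse. The same check expressed in Go (a sketch reading one of the paths from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within the given window, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
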
	I0127 13:29:23.083360  531586 kubeadm.go:392] StartCluster: {Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: S
ubnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:29:23.083511  531586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 13:29:23.083604  531586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:29:23.138902  531586 cri.go:89] found id: "d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170"
	I0127 13:29:23.138937  531586 cri.go:89] found id: "b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46"
	I0127 13:29:23.138941  531586 cri.go:89] found id: "ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f"
	I0127 13:29:23.138945  531586 cri.go:89] found id: "e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67"
	I0127 13:29:23.138947  531586 cri.go:89] found id: "7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1"
	I0127 13:29:23.138952  531586 cri.go:89] found id: "5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc"
	I0127 13:29:23.138955  531586 cri.go:89] found id: "2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b"
	I0127 13:29:23.138958  531586 cri.go:89] found id: ""
	I0127 13:29:23.139005  531586 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 13:29:23.161523  531586 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T13:29:23Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 13:29:23.161644  531586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:29:23.177352  531586 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:29:23.177377  531586 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:29:23.177436  531586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:29:23.190684  531586 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:29:23.191837  531586 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-296225" does not appear in /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:23.192568  531586 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-466901/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-296225" cluster setting kubeconfig missing "newest-cni-296225" context setting]
	I0127 13:29:23.193462  531586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:23.195884  531586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:29:23.210992  531586 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.46
	I0127 13:29:23.211040  531586 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:29:23.211058  531586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 13:29:23.211141  531586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:29:23.266429  531586 cri.go:89] found id: "d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170"
	I0127 13:29:23.266458  531586 cri.go:89] found id: "b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46"
	I0127 13:29:23.266464  531586 cri.go:89] found id: "ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f"
	I0127 13:29:23.266468  531586 cri.go:89] found id: "e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67"
	I0127 13:29:23.266472  531586 cri.go:89] found id: "7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1"
	I0127 13:29:23.266477  531586 cri.go:89] found id: "5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc"
	I0127 13:29:23.266481  531586 cri.go:89] found id: "2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b"
	I0127 13:29:23.266485  531586 cri.go:89] found id: ""
	I0127 13:29:23.266492  531586 cri.go:252] Stopping containers: [d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170 b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46 ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67 7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1 5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc 2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b]
	I0127 13:29:23.266560  531586 ssh_runner.go:195] Run: which crictl
	I0127 13:29:23.272382  531586 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170 b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46 ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67 7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1 5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc 2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b
	I0127 13:29:23.324924  531586 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:29:23.345385  531586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:29:23.359679  531586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:29:23.359712  531586 kubeadm.go:157] found existing configuration files:
	
	I0127 13:29:23.359774  531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:29:23.371542  531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:29:23.371634  531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:29:23.383083  531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:29:23.393186  531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:29:23.393267  531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:29:23.406589  531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:29:23.417348  531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:29:23.417444  531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:29:23.430008  531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:29:23.441860  531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:29:23.441965  531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
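
The four grep/rm pairs above are the stale-config cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed so the following "kubeadm init phase kubeconfig" regenerates it (here every file is simply absent, so each grep exits with status 2). A compact Go rendering of that check-then-remove pattern, for illustration only:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or pointing at the wrong endpoint: drop it so kubeadm
                // regenerates it in the "init phase kubeconfig" step that follows.
                _ = os.Remove(f)
                fmt.Println("removed (or absent):", f)
            }
        }
    }
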
	I0127 13:29:23.452352  531586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:29:23.463556  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:23.634151  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:24.791692  531586 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.15748875s)
	I0127 13:29:24.791732  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:25.027708  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:25.110706  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:25.211743  531586 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:25.211882  531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:25.712041  531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:25.596457  529417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:29:25.611060  529417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:29:25.631563  529417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:29:25.631668  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:25.631709  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-325510 minikube.k8s.io/updated_at=2025_01_27T13_29_25_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=default-k8s-diff-port-325510 minikube.k8s.io/primary=true
	I0127 13:29:25.654141  529417 ops.go:34] apiserver oom_adj: -16
	I0127 13:29:25.885770  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:26.386140  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:26.885887  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:27.386520  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:27.886746  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:28.386093  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:28.523381  529417 kubeadm.go:1113] duration metric: took 2.89179334s to wait for elevateKubeSystemPrivileges
	I0127 13:29:28.523431  529417 kubeadm.go:394] duration metric: took 4m34.628614328s to StartCluster
	I0127 13:29:28.523462  529417 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:28.523566  529417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:28.526181  529417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:28.526636  529417 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:29:28.526773  529417 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:29:28.526897  529417 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-325510"
	I0127 13:29:28.526920  529417 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-325510"
	I0127 13:29:28.526920  529417 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-325510"
	W0127 13:29:28.526930  529417 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:29:28.526943  529417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-325510"
	I0127 13:29:28.526965  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.527036  529417 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-325510"
	I0127 13:29:28.527054  529417 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-325510"
	W0127 13:29:28.527061  529417 addons.go:247] addon dashboard should already be in state true
	I0127 13:29:28.527086  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.527083  529417 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-325510"
	I0127 13:29:28.527117  529417 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-325510"
	W0127 13:29:28.527128  529417 addons.go:247] addon metrics-server should already be in state true
	I0127 13:29:28.527164  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.527436  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.527441  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.526898  529417 config.go:182] Loaded profile config "default-k8s-diff-port-325510": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:28.527475  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.527490  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.527619  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.527655  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.527667  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.527700  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.528609  529417 out.go:177] * Verifying Kubernetes components...
	I0127 13:29:28.530189  529417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:28.546697  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I0127 13:29:28.547331  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.547485  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I0127 13:29:28.547528  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
	I0127 13:29:28.547893  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.548297  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.548482  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.548497  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.548832  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.549020  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.549338  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.549354  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.549743  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
	I0127 13:29:28.549980  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.550227  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.550241  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.550306  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.550880  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.550926  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.551223  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.551394  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.551416  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.551971  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.552001  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.552189  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.552980  529417 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-325510"
	W0127 13:29:28.553005  529417 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:29:28.553038  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.553380  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.553426  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.555977  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.556013  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.572312  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I0127 13:29:28.573004  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.573598  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.573617  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.573988  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.574040  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
	I0127 13:29:28.574171  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.574508  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43399
	I0127 13:29:28.575096  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.575836  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.576253  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.576355  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.576375  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.577245  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.577419  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.579103  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.579756  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.579779  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.580518  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42715
	I0127 13:29:28.580886  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.581173  529417 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:29:28.581406  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.581423  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.581695  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.581855  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.582619  529417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:29:28.583309  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.583662  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.584326  529417 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:28.584346  529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:29:28.584368  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.587322  529417 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:29:28.587999  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.588047  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.591379  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.591427  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.591456  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.591496  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.591585  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.591752  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.591911  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.592584  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:29:28.592601  529417 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:29:28.592621  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.593660  529417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:29:26.212209  531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:26.236202  531586 api_server.go:72] duration metric: took 1.024459251s to wait for apiserver process to appear ...
	I0127 13:29:26.236238  531586 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:26.236266  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:26.236911  531586 api_server.go:269] stopped: https://192.168.72.46:8443/healthz: Get "https://192.168.72.46:8443/healthz": dial tcp 192.168.72.46:8443: connect: connection refused
	I0127 13:29:26.737118  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:29.390944  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:29:29.390990  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:29:29.391010  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:29.446439  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:29:29.446477  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:29:29.737006  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:29.743881  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:29:29.743915  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:29:30.237168  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:30.251557  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:29:30.251594  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:29:30.737227  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:30.744425  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:29:30.744461  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:29:31.237274  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:31.244159  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 200:
	ok
	I0127 13:29:31.252139  531586 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:31.252182  531586 api_server.go:131] duration metric: took 5.015933408s to wait for apiserver health ...
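[Editor's note] The healthz wait above keeps probing the apiserver until it answers 200, logging the 403/500 bodies in the meantime. A minimal sketch of that kind of polling loop is shown below; it is illustrative only, not minikube's actual api_server.go code, and the URL, interval, and timeout are assumptions.

// healthzpoll.go - illustrative polling of a kube-apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
// Non-200 bodies (e.g. 403 before RBAC bootstrap, 500 while post-start hooks
// are still failing) are printed, mirroring the log output above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// During bring-up the apiserver presents a cluster-local certificate;
		// a real client would trust the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.46:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}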
	I0127 13:29:31.252194  531586 cni.go:84] Creating CNI manager for ""
	I0127 13:29:31.252203  531586 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:31.253925  531586 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:29:31.255434  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:29:31.267804  531586 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
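[Editor's note] The step above creates /etc/cni/net.d and copies a 496-byte 1-k8s.conflist for the bridge CNI. The actual file minikube templates is not reproduced in this log; the sketch below only illustrates the general shape of a minimal bridge conflist, and every field value in it is an assumption.

// writecni.go - illustrative: write a minimal bridge CNI conflist.
// The contents below are assumptions for illustration; the real
// 1-k8s.conflist generated by minikube may differ.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}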
	I0127 13:29:31.293560  531586 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:31.313542  531586 system_pods.go:59] 8 kube-system pods found
	I0127 13:29:31.313590  531586 system_pods.go:61] "coredns-668d6bf9bc-xvbfh" [0d7c4469-d90e-4487-8433-1167183525e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:29:31.313601  531586 system_pods.go:61] "etcd-newest-cni-296225" [97ed55b3-82a8-4ecf-a721-26a592f2c8cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:29:31.313612  531586 system_pods.go:61] "kube-apiserver-newest-cni-296225" [d31606a7-2b78-4859-80a7-35b783b0a444] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:29:31.313621  531586 system_pods.go:61] "kube-controller-manager-newest-cni-296225" [4d6c4da8-a13a-44c2-a877-13b9453142a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:29:31.313631  531586 system_pods.go:61] "kube-proxy-dzvvc" [1ec15899-c7eb-436d-9e74-fadde7ecacb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 13:29:31.313640  531586 system_pods.go:61] "kube-scheduler-newest-cni-296225" [2c230f78-68ac-4abb-9cdd-5cf666793981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:29:31.313655  531586 system_pods.go:61] "metrics-server-f79f97bbb-2pv7p" [1246f427-ed62-4202-8170-5ae96be7ccf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:31.313671  531586 system_pods.go:61] "storage-provisioner" [7b83dbf7-d497-42bb-9489-614ae5ba76fa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 13:29:31.313680  531586 system_pods.go:74] duration metric: took 20.080673ms to wait for pod list to return data ...
	I0127 13:29:31.313709  531586 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:29:31.321205  531586 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:29:31.321236  531586 node_conditions.go:123] node cpu capacity is 2
	I0127 13:29:31.321251  531586 node_conditions.go:105] duration metric: took 7.532371ms to run NodePressure ...
	I0127 13:29:31.321276  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:31.758136  531586 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:29:31.783447  531586 ops.go:34] apiserver oom_adj: -16
	I0127 13:29:31.783539  531586 kubeadm.go:597] duration metric: took 8.606153189s to restartPrimaryControlPlane
	I0127 13:29:31.783582  531586 kubeadm.go:394] duration metric: took 8.700235213s to StartCluster
	I0127 13:29:31.783614  531586 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:31.783739  531586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:31.786536  531586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:31.786926  531586 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:29:31.787022  531586 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:29:31.787188  531586 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-296225"
	I0127 13:29:31.787308  531586 config.go:182] Loaded profile config "newest-cni-296225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:31.787320  531586 addons.go:69] Setting metrics-server=true in profile "newest-cni-296225"
	I0127 13:29:31.787353  531586 addons.go:238] Setting addon metrics-server=true in "newest-cni-296225"
	W0127 13:29:31.787367  531586 addons.go:247] addon metrics-server should already be in state true
	I0127 13:29:31.787318  531586 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-296225"
	W0127 13:29:31.787388  531586 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:29:31.787413  531586 host.go:66] Checking if "newest-cni-296225" exists ...
	I0127 13:29:31.787446  531586 host.go:66] Checking if "newest-cni-296225" exists ...
	I0127 13:29:31.787286  531586 addons.go:69] Setting dashboard=true in profile "newest-cni-296225"
	I0127 13:29:31.787526  531586 addons.go:238] Setting addon dashboard=true in "newest-cni-296225"
	W0127 13:29:31.787557  531586 addons.go:247] addon dashboard should already be in state true
	I0127 13:29:31.787597  531586 host.go:66] Checking if "newest-cni-296225" exists ...
	I0127 13:29:31.787246  531586 addons.go:69] Setting default-storageclass=true in profile "newest-cni-296225"
	I0127 13:29:31.787654  531586 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-296225"
	I0127 13:29:31.787886  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.787922  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.787946  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.787971  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.788040  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.788067  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.788279  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.788348  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.791198  531586 out.go:177] * Verifying Kubernetes components...
	I0127 13:29:31.792729  531586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:31.809862  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43917
	I0127 13:29:31.810576  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.810735  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43481
	I0127 13:29:31.811453  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.811479  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.811565  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.812009  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.812033  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.812507  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.814254  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0127 13:29:31.814774  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.815750  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.816710  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.816754  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.817133  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.817157  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.817572  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.818143  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.818200  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.819519  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.824362  531586 addons.go:238] Setting addon default-storageclass=true in "newest-cni-296225"
	W0127 13:29:31.824386  531586 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:29:31.824421  531586 host.go:66] Checking if "newest-cni-296225" exists ...
	I0127 13:29:31.824804  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.824849  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.835403  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I0127 13:29:31.836274  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.836962  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.836997  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.837484  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.838061  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.838106  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.839703  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37671
	I0127 13:29:31.844903  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36575
	I0127 13:29:31.850434  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38467
	I0127 13:29:31.864579  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.864731  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.864805  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.865332  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.865353  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.865507  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.865520  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.865755  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.865888  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.866153  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.866263  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.866280  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.866349  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.866765  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.867410  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.867459  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.869030  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:31.870746  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:31.871229  531586 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:29:31.872679  531586 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:29:31.872852  531586 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:31.872877  531586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:29:31.872899  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:31.874840  531586 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:29:31.874867  531586 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:29:31.874889  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:31.879359  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.879992  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.880845  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:31.880876  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.880911  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:31.880935  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.881182  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:31.881276  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:31.881374  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:31.881423  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:31.881494  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:31.881545  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:31.881692  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:31.881713  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:31.890590  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42247
	I0127 13:29:31.891311  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.891961  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.891983  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.892382  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.892632  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.894810  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:31.895223  531586 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:31.895240  531586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:29:31.895450  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:31.895697  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43877
	I0127 13:29:31.896698  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.897633  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.897658  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.898129  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.898280  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.899110  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.899759  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:31.899782  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.899962  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:31.900155  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:31.900337  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:31.900466  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:31.904472  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:31.907054  531586 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:29:31.908332  531586 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:29:28.595128  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:29:28.595147  529417 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:29:28.595179  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.596235  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.597222  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.597304  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.597628  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.597788  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.597943  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.598078  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.599130  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.599670  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.599694  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.599880  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.600049  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.600195  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.600327  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.610825  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I0127 13:29:28.611379  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.611919  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.611939  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.612288  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.612480  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.614326  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.614636  529417 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:28.614668  529417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:29:28.614688  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.618088  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.618805  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.618958  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.619294  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.619517  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.619738  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.619953  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.750007  529417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:28.770798  529417 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-325510" to be "Ready" ...
	I0127 13:29:28.794753  529417 node_ready.go:49] node "default-k8s-diff-port-325510" has status "Ready":"True"
	I0127 13:29:28.794783  529417 node_ready.go:38] duration metric: took 23.945006ms for node "default-k8s-diff-port-325510" to be "Ready" ...
	I0127 13:29:28.794796  529417 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:28.801618  529417 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:28.841055  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:29:28.841089  529417 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:29:28.865445  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:29:28.865479  529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:29:28.870120  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:28.887649  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:29:28.887691  529417 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:29:28.908488  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:28.926717  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:29:28.926752  529417 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:29:28.949234  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:29:28.949269  529417 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:29:28.983403  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:28.983438  529417 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:29:29.010532  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:29:29.010567  529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:29:29.085215  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:29:29.085250  529417 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:29:29.085479  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:29.180902  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:29:29.180935  529417 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:29:29.239792  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:29:29.239830  529417 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:29:29.350534  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:29:29.350566  529417 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:29:29.463271  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:29.463315  529417 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:29:29.551176  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:30.055621  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.147081618s)
	I0127 13:29:30.055704  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.055723  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.056191  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.056215  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.056226  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.056255  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.056323  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.056341  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18618522s)
	I0127 13:29:30.056436  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.056465  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.056627  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.056649  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.056963  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.058774  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.058792  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.058808  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.058817  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.059068  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.059083  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.059098  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.083977  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.084003  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.084571  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.084583  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.084595  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.830919  529417 pod_ready.go:103] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:30.961132  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.875594685s)
	I0127 13:29:30.961202  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.961219  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.963600  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.963608  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.963645  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.963654  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.963662  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.964368  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.964392  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.964451  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.964463  529417 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-325510"
	I0127 13:29:32.478187  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.926948394s)
	I0127 13:29:32.478257  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:32.478272  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:32.478650  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:32.478671  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:32.478683  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:32.478693  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:32.479015  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:32.479033  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:32.482147  529417 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-325510 addons enable metrics-server
	
	I0127 13:29:32.483736  529417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:29:32.484840  529417 addons.go:514] duration metric: took 3.958103252s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:29:31.909581  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:29:31.909609  531586 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:29:31.909639  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:31.913216  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.913664  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:31.913695  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.913996  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:31.914211  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:31.914377  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:31.914514  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:32.089563  531586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:32.127765  531586 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:32.127896  531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:32.149480  531586 api_server.go:72] duration metric: took 362.501205ms to wait for apiserver process to appear ...
	I0127 13:29:32.149531  531586 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:32.149576  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:32.170573  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 200:
	ok
	I0127 13:29:32.171739  531586 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:32.171771  531586 api_server.go:131] duration metric: took 22.230634ms to wait for apiserver health ...
	I0127 13:29:32.171784  531586 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:32.186307  531586 system_pods.go:59] 8 kube-system pods found
	I0127 13:29:32.186342  531586 system_pods.go:61] "coredns-668d6bf9bc-xvbfh" [0d7c4469-d90e-4487-8433-1167183525e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:29:32.186349  531586 system_pods.go:61] "etcd-newest-cni-296225" [97ed55b3-82a8-4ecf-a721-26a592f2c8cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:29:32.186360  531586 system_pods.go:61] "kube-apiserver-newest-cni-296225" [d31606a7-2b78-4859-80a7-35b783b0a444] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:29:32.186368  531586 system_pods.go:61] "kube-controller-manager-newest-cni-296225" [4d6c4da8-a13a-44c2-a877-13b9453142a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:29:32.186373  531586 system_pods.go:61] "kube-proxy-dzvvc" [1ec15899-c7eb-436d-9e74-fadde7ecacb8] Running
	I0127 13:29:32.186380  531586 system_pods.go:61] "kube-scheduler-newest-cni-296225" [2c230f78-68ac-4abb-9cdd-5cf666793981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:29:32.186388  531586 system_pods.go:61] "metrics-server-f79f97bbb-2pv7p" [1246f427-ed62-4202-8170-5ae96be7ccf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:32.186393  531586 system_pods.go:61] "storage-provisioner" [7b83dbf7-d497-42bb-9489-614ae5ba76fa] Running
	I0127 13:29:32.186408  531586 system_pods.go:74] duration metric: took 14.616708ms to wait for pod list to return data ...
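[Editor's note] The kube-system pod listing above is the kind of readiness snapshot that can also be taken directly with client-go. The sketch below is a rough, illustrative equivalent, not the test helper itself; the kubeconfig path is an assumption (the run uses its own minikube-integration kubeconfig).

// listpods.go - illustrative check of kube-system pod readiness via client-go.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}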
	I0127 13:29:32.186420  531586 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:29:32.194387  531586 default_sa.go:45] found service account: "default"
	I0127 13:29:32.194429  531586 default_sa.go:55] duration metric: took 7.999321ms for default service account to be created ...
	I0127 13:29:32.194447  531586 kubeadm.go:582] duration metric: took 407.475818ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:29:32.194469  531586 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:29:32.215128  531586 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:29:32.215228  531586 node_conditions.go:123] node cpu capacity is 2
	I0127 13:29:32.215257  531586 node_conditions.go:105] duration metric: took 20.782574ms to run NodePressure ...
	I0127 13:29:32.215325  531586 start.go:241] waiting for startup goroutines ...
	I0127 13:29:32.224708  531586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:29:32.224738  531586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:29:32.233504  531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:32.295258  531586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:29:32.295311  531586 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:29:32.340500  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:29:32.340623  531586 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:29:32.552816  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:29:32.552969  531586 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:29:32.615247  531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:32.615684  531586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:32.615709  531586 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:29:32.772893  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:29:32.772938  531586 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:29:32.831244  531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:32.939523  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:29:32.939558  531586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:29:33.121982  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:29:33.122026  531586 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:29:33.248581  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:29:33.248619  531586 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:29:33.339337  531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.105786367s)
	I0127 13:29:33.339401  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:33.339413  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:33.341380  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
	I0127 13:29:33.341463  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:33.341484  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:33.341498  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:33.341511  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:33.342973  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
	I0127 13:29:33.342984  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:33.342995  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:33.350366  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:33.350388  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:33.350671  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:33.350685  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:33.367462  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:29:33.367490  531586 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:29:33.428952  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:29:33.428989  531586 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:29:33.512094  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:33.512127  531586 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:29:33.585612  531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:34.628686  531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.013367863s)
	I0127 13:29:34.628749  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:34.628761  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:34.629106  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:34.629133  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:34.629143  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:34.629153  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:34.629394  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:34.629407  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:34.834013  531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.002708663s)
	I0127 13:29:34.834087  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:34.834105  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:34.834399  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:34.834418  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:34.834427  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:34.834435  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:34.834714  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:34.834733  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:34.834746  531586 addons.go:479] Verifying addon metrics-server=true in "newest-cni-296225"
	I0127 13:29:35.573250  531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.987594335s)
	I0127 13:29:35.573316  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:35.573332  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:35.573696  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
	I0127 13:29:35.573748  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:35.573762  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:35.573820  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:35.573835  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:35.574254  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
	I0127 13:29:35.575985  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:35.576005  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:35.577914  531586 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-296225 addons enable metrics-server
	
	I0127 13:29:35.579611  531586 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 13:29:35.580983  531586 addons.go:514] duration metric: took 3.79397273s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 13:29:35.581031  531586 start.go:246] waiting for cluster config update ...
	I0127 13:29:35.581050  531586 start.go:255] writing updated cluster config ...
	I0127 13:29:35.581368  531586 ssh_runner.go:195] Run: rm -f paused
	I0127 13:29:35.638909  531586 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 13:29:35.640552  531586 out.go:177] * Done! kubectl is now configured to use "newest-cni-296225" cluster and "default" namespace by default
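The "newest-cni-296225" start above completes cleanly with the default-storageclass, storage-provisioner, metrics-server and dashboard addons enabled; that profile is not one of the failing tests. A minimal post-start check, assuming the profile and context names shown in the log, would be:

	minikube -p newest-cni-296225 addons list
	kubectl --context newest-cni-296225 -n kube-system get pods
	kubectl --context newest-cni-296225 -n kubernetes-dashboard get pods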
	I0127 13:29:33.314653  529417 pod_ready.go:103] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:34.308087  529417 pod_ready.go:93] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:34.308114  529417 pod_ready.go:82] duration metric: took 5.506466228s for pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.308126  529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.314009  529417 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:34.314033  529417 pod_ready.go:82] duration metric: took 5.900062ms for pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.314044  529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.321801  529417 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:34.321823  529417 pod_ready.go:82] duration metric: took 7.77255ms for pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.321836  529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:36.328661  529417 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:38.833405  529417 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:39.331942  529417 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:39.331971  529417 pod_ready.go:82] duration metric: took 5.010119744s for pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:39.331983  529417 pod_ready.go:39] duration metric: took 10.537174991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:39.332004  529417 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:39.332061  529417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:39.364826  529417 api_server.go:72] duration metric: took 10.838138782s to wait for apiserver process to appear ...
	I0127 13:29:39.364856  529417 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:39.364880  529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0127 13:29:39.395339  529417 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0127 13:29:39.403463  529417 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:39.403502  529417 api_server.go:131] duration metric: took 38.63787ms to wait for apiserver health ...
	I0127 13:29:39.403515  529417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:39.428974  529417 system_pods.go:59] 9 kube-system pods found
	I0127 13:29:39.429008  529417 system_pods.go:61] "coredns-668d6bf9bc-mgxmm" [15f65844-c002-4253-9f43-609e6d3d86c0] Running
	I0127 13:29:39.429013  529417 system_pods.go:61] "coredns-668d6bf9bc-rlvv2" [b116f02c-d30f-4869-bef1-55722f0f1a58] Running
	I0127 13:29:39.429016  529417 system_pods.go:61] "etcd-default-k8s-diff-port-325510" [88fd4825-b74c-43e0-8a3e-fd60bb409b76] Running
	I0127 13:29:39.429021  529417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-325510" [4eeff905-b36f-4be8-ac24-77c8421495c4] Running
	I0127 13:29:39.429024  529417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-325510" [07956b85-b521-44cc-be77-675703803a17] Running
	I0127 13:29:39.429027  529417 system_pods.go:61] "kube-proxy-gb24h" [d0d50b9f-b02f-49dd-9a7a-78e202ce247a] Running
	I0127 13:29:39.429031  529417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-325510" [a7c2c0c5-c386-454d-9542-852b02901060] Running
	I0127 13:29:39.429037  529417 system_pods.go:61] "metrics-server-f79f97bbb-vtvnn" [07e0c335-6a2b-4ef3-b153-3689cdb7ccaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:39.429041  529417 system_pods.go:61] "storage-provisioner" [7b76ca76-2bfc-44c4-bfc3-5ac3f4cde72b] Running
	I0127 13:29:39.429048  529417 system_pods.go:74] duration metric: took 25.526569ms to wait for pod list to return data ...
	I0127 13:29:39.429056  529417 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:29:39.449041  529417 default_sa.go:45] found service account: "default"
	I0127 13:29:39.449083  529417 default_sa.go:55] duration metric: took 20.019081ms for default service account to be created ...
	I0127 13:29:39.449098  529417 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:29:39.468326  529417 system_pods.go:87] 9 kube-system pods found
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	5d496ed292955       523cad1a4df73       28 seconds ago      Exited              dashboard-metrics-scraper   9                   cbc53823f9707       dashboard-metrics-scraper-86c6bf9756-lvdlj
	0f62aa9b27a1a       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   e82a817a71c1d       kubernetes-dashboard-7779f9b69b-tzvnn
	39d6d79d902e3       6e38f40d628db       21 minutes ago      Running             storage-provisioner         0                   bd8016dc85546       storage-provisioner
	d9d2dcc259fe2       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   07ef533a24394       coredns-668d6bf9bc-wf444
	fcdac24e6b66e       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   a8a50f8ee2217       coredns-668d6bf9bc-9h4k2
	3459cfb76f523       e29f9c7391fd9       21 minutes ago      Running             kube-proxy                  0                   fe4a7e8aa1c12       kube-proxy-vp88s
	d291a8fcc7a13       a9e7e6b294baf       22 minutes ago      Running             etcd                        2                   9839336f6800b       etcd-embed-certs-766944
	bf339929be775       019ee182b58e2       22 minutes ago      Running             kube-controller-manager     2                   61c118dc3258c       kube-controller-manager-embed-certs-766944
	0555cf1755bfa       95c0bda56fc4d       22 minutes ago      Running             kube-apiserver              2                   e7c91473efb61       kube-apiserver-embed-certs-766944
	f43c952b31735       2b0d6572d062c       22 minutes ago      Running             kube-scheduler              2                   aea7a28d5a03a       kube-scheduler-embed-certs-766944
	
	
	==> containerd <==
	Jan 27 13:45:11 embed-certs-766944 containerd[561]: time="2025-01-27T13:45:11.658080814Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 13:45:11 embed-certs-766944 containerd[561]: time="2025-01-27T13:45:11.660142342Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 13:45:11 embed-certs-766944 containerd[561]: time="2025-01-27T13:45:11.660249074Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 13:45:29 embed-certs-766944 containerd[561]: time="2025-01-27T13:45:29.652408235Z" level=info msg="CreateContainer within sandbox \"cbc53823f970763b48866f810bd56a7ec4b9ade6c78e1be719593fc063195f9c\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 27 13:45:29 embed-certs-766944 containerd[561]: time="2025-01-27T13:45:29.677487154Z" level=info msg="CreateContainer within sandbox \"cbc53823f970763b48866f810bd56a7ec4b9ade6c78e1be719593fc063195f9c\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"5fba71c415e0fb4b978e44dca8efea4e38972d328d7a879e98572233d741bbd5\""
	Jan 27 13:45:29 embed-certs-766944 containerd[561]: time="2025-01-27T13:45:29.678594588Z" level=info msg="StartContainer for \"5fba71c415e0fb4b978e44dca8efea4e38972d328d7a879e98572233d741bbd5\""
	Jan 27 13:45:29 embed-certs-766944 containerd[561]: time="2025-01-27T13:45:29.766231198Z" level=info msg="StartContainer for \"5fba71c415e0fb4b978e44dca8efea4e38972d328d7a879e98572233d741bbd5\" returns successfully"
	Jan 27 13:45:29 embed-certs-766944 containerd[561]: time="2025-01-27T13:45:29.808939389Z" level=info msg="shim disconnected" id=5fba71c415e0fb4b978e44dca8efea4e38972d328d7a879e98572233d741bbd5 namespace=k8s.io
	Jan 27 13:45:29 embed-certs-766944 containerd[561]: time="2025-01-27T13:45:29.809104439Z" level=warning msg="cleaning up after shim disconnected" id=5fba71c415e0fb4b978e44dca8efea4e38972d328d7a879e98572233d741bbd5 namespace=k8s.io
	Jan 27 13:45:29 embed-certs-766944 containerd[561]: time="2025-01-27T13:45:29.809158366Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 13:45:30 embed-certs-766944 containerd[561]: time="2025-01-27T13:45:30.415662587Z" level=info msg="RemoveContainer for \"b6d45d550c7896c70d919c0efbbcd2fe891b2c4158285f50419eab2f5de93060\""
	Jan 27 13:45:30 embed-certs-766944 containerd[561]: time="2025-01-27T13:45:30.422277394Z" level=info msg="RemoveContainer for \"b6d45d550c7896c70d919c0efbbcd2fe891b2c4158285f50419eab2f5de93060\" returns successfully"
	Jan 27 13:50:23 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:23.650482954Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 13:50:23 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:23.676637593Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 13:50:23 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:23.679091326Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 13:50:23 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:23.679187557Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 13:50:30 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:30.650908078Z" level=info msg="CreateContainer within sandbox \"cbc53823f970763b48866f810bd56a7ec4b9ade6c78e1be719593fc063195f9c\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 27 13:50:30 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:30.678350675Z" level=info msg="CreateContainer within sandbox \"cbc53823f970763b48866f810bd56a7ec4b9ade6c78e1be719593fc063195f9c\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"5d496ed2929556e7114ffd177100511adbc104daab5af6b5cb7e2cf61bba780a\""
	Jan 27 13:50:30 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:30.679282630Z" level=info msg="StartContainer for \"5d496ed2929556e7114ffd177100511adbc104daab5af6b5cb7e2cf61bba780a\""
	Jan 27 13:50:30 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:30.766589635Z" level=info msg="StartContainer for \"5d496ed2929556e7114ffd177100511adbc104daab5af6b5cb7e2cf61bba780a\" returns successfully"
	Jan 27 13:50:30 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:30.817443265Z" level=info msg="shim disconnected" id=5d496ed2929556e7114ffd177100511adbc104daab5af6b5cb7e2cf61bba780a namespace=k8s.io
	Jan 27 13:50:30 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:30.817504762Z" level=warning msg="cleaning up after shim disconnected" id=5d496ed2929556e7114ffd177100511adbc104daab5af6b5cb7e2cf61bba780a namespace=k8s.io
	Jan 27 13:50:30 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:30.817514868Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 13:50:31 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:31.163786772Z" level=info msg="RemoveContainer for \"5fba71c415e0fb4b978e44dca8efea4e38972d328d7a879e98572233d741bbd5\""
	Jan 27 13:50:31 embed-certs-766944 containerd[561]: time="2025-01-27T13:50:31.171242693Z" level=info msg="RemoveContainer for \"5fba71c415e0fb4b978e44dca8efea4e38972d328d7a879e98572233d741bbd5\" returns successfully"
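The repeated PullImage failures above target fake.domain/registry.k8s.io/echoserver:1.4, which appears to be the suite's deliberately unresolvable registry for the metrics-server test image, so ErrImagePull/ImagePullBackOff on that pod is expected here rather than a containerd regression. A hedged way to confirm the failure is confined to that image (pod name taken from the node description further below):

	kubectl -n kube-system get pods -o wide
	kubectl -n kube-system get events --field-selector involvedObject.name=metrics-server-f79f97bbb-27dz9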
	
	
	==> coredns [d9d2dcc259fe236d1d4e6632fbc41dff6485bbdacee6e2bfbf2dc90139d0ae6b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [fcdac24e6b66ea31ee9a663931fb8f16785151898e1bac8af61513e5c489264d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-766944
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-766944
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
	                    minikube.k8s.io/name=embed-certs-766944
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T13_29_04_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 13:29:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-766944
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 13:50:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 13:49:09 +0000   Mon, 27 Jan 2025 13:28:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 13:49:09 +0000   Mon, 27 Jan 2025 13:28:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 13:49:09 +0000   Mon, 27 Jan 2025 13:28:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 13:49:09 +0000   Mon, 27 Jan 2025 13:29:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    embed-certs-766944
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b309610030e943c1a3e57ec2de157f48
	  System UUID:                b3096100-30e9-43c1-a3e5-7ec2de157f48
	  Boot ID:                    c569723a-7b42-46be-a2e8-86419d949924
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-9h4k2                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-wf444                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-766944                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-766944             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-766944    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-vp88s                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-766944             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-27dz9                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-lvdlj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-tzvnn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node embed-certs-766944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node embed-certs-766944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node embed-certs-766944 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node embed-certs-766944 event: Registered Node embed-certs-766944 in Controller
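The node description above is standard "kubectl describe node" output captured for the embed-certs-766944 profile; the requested-resources percentages are computed against the 2-CPU / 2164184Ki allocatable shown (for example, 950m of 2000m is about 47%). To regenerate it against a live profile (context name assumed to equal the profile name):

	kubectl --context embed-certs-766944 describe node embed-certs-766944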
	
	
	==> dmesg <==
	[  +0.054561] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043557] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.074571] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.794093] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.619428] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.146347] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +0.059309] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067423] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
	[  +0.208740] systemd-fstab-generator[510]: Ignoring "noauto" option for root device
	[  +0.140013] systemd-fstab-generator[522]: Ignoring "noauto" option for root device
	[  +0.332310] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +1.576886] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +2.224379] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.896402] kauditd_printk_skb: 225 callbacks suppressed
	[  +5.020621] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.311397] kauditd_printk_skb: 80 callbacks suppressed
	[Jan27 13:28] systemd-fstab-generator[3102]: Ignoring "noauto" option for root device
	[Jan27 13:29] systemd-fstab-generator[3479]: Ignoring "noauto" option for root device
	[  +0.092900] kauditd_printk_skb: 87 callbacks suppressed
	[  +4.892281] systemd-fstab-generator[3573]: Ignoring "noauto" option for root device
	[  +0.195549] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.807295] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.791904] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [d291a8fcc7a13eb646b7b026a5294ef90a8076cee71ded6ea6c370c1cb4d8758] <==
	{"level":"info","ts":"2025-01-27T13:28:59.366417Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T13:28:59.366562Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T13:28:59.368196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T13:28:59.375813Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T13:28:59.376535Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c3e0d5efc74209","local-member-id":"602226ed500416f5","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:28:59.379417Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:28:59.382186Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:28:59.383374Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T13:28:59.387097Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.24:2379"}
	{"level":"info","ts":"2025-01-27T13:28:59.387956Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T13:29:21.869053Z","caller":"traceutil/trace.go:171","msg":"trace[2038413137] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"219.331871ms","start":"2025-01-27T13:29:21.638435Z","end":"2025-01-27T13:29:21.857767Z","steps":["trace[2038413137] 'process raft request'  (duration: 219.204333ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:29:21.861690Z","caller":"traceutil/trace.go:171","msg":"trace[310309576] linearizableReadLoop","detail":"{readStateIndex:514; appliedIndex:514; }","duration":"194.015035ms","start":"2025-01-27T13:29:21.663904Z","end":"2025-01-27T13:29:21.857919Z","steps":["trace[310309576] 'read index received'  (duration: 194.007853ms)","trace[310309576] 'applied index is now lower than readState.Index'  (duration: 6.149µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T13:29:21.882147Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.44668ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:29:21.882394Z","caller":"traceutil/trace.go:171","msg":"trace[1134572185] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:501; }","duration":"218.494556ms","start":"2025-01-27T13:29:21.663872Z","end":"2025-01-27T13:29:21.882367Z","steps":["trace[1134572185] 'agreement among raft nodes before linearized reading'  (duration: 208.425531ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:29:21.987773Z","caller":"traceutil/trace.go:171","msg":"trace[734236126] transaction","detail":"{read_only:false; response_revision:502; number_of_response:1; }","duration":"307.251808ms","start":"2025-01-27T13:29:21.680507Z","end":"2025-01-27T13:29:21.987759Z","steps":["trace[734236126] 'process raft request'  (duration: 301.537294ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:29:22.002774Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T13:29:21.680485Z","time spent":"307.328318ms","remote":"127.0.0.1:42034","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":683,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-uqvr7ou63jm7yegcrgwrj44fk4\" mod_revision:399 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-uqvr7ou63jm7yegcrgwrj44fk4\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-uqvr7ou63jm7yegcrgwrj44fk4\" > >"}
	{"level":"info","ts":"2025-01-27T13:38:59.457753Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":832}
	{"level":"info","ts":"2025-01-27T13:38:59.498300Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":832,"took":"39.764079ms","hash":621796949,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2961408,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-01-27T13:38:59.498610Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":621796949,"revision":832,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T13:43:59.466802Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1082}
	{"level":"info","ts":"2025-01-27T13:43:59.472593Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1082,"took":"4.811195ms","hash":1139096532,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1757184,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T13:43:59.472839Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1139096532,"revision":1082,"compact-revision":832}
	{"level":"info","ts":"2025-01-27T13:48:59.476167Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1332}
	{"level":"info","ts":"2025-01-27T13:48:59.481542Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1332,"took":"4.287605ms","hash":563428222,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1818624,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T13:48:59.481933Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":563428222,"revision":1332,"compact-revision":1082}
	
	
	==> kernel <==
	 13:51:00 up 26 min,  0 users,  load average: 0.28, 0.27, 0.27
	Linux embed-certs-766944 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0555cf1755bfa4ffceceb7eeb78ba66f26a43aea22a51df17ce8fe72b8dbef18] <==
	I0127 13:47:02.159018       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:47:02.159121       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 13:49:01.156690       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:49:01.157261       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 13:49:02.159371       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:49:02.159492       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 13:49:02.159569       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:49:02.159595       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0127 13:49:02.160716       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:49:02.160809       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 13:50:02.161083       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:50:02.161181       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 13:50:02.161683       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:50:02.161856       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 13:50:02.162344       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:50:02.163683       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [bf339929be77500495f3349658067abd0b3fe3d27315254cf7fa5c05fd585399] <==
	E0127 13:46:07.920563       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:46:08.040257       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:46:37.928006       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:46:38.049583       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:47:07.935789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:47:08.060513       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:47:37.943200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:47:38.074895       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:48:07.950274       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:48:08.088210       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:48:37.957785       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:48:38.098576       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:49:07.965060       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:49:08.105875       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 13:49:09.236344       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-766944"
	E0127 13:49:37.972593       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:49:38.115824       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:50:07.979175       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:50:08.125245       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 13:50:31.185388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="557.39µs"
	I0127 13:50:34.868016       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="115.514µs"
	I0127 13:50:37.665927       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="93.003µs"
	E0127 13:50:37.986452       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:50:38.132512       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 13:50:49.667546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="83.937µs"
	
	
	==> kube-proxy [3459cfb76f5236562d752f9b4be69618f37c5518aa75a122861e218072fb5446] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 13:29:09.597242       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 13:29:09.610563       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.24"]
	E0127 13:29:09.610848       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 13:29:09.732317       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 13:29:09.732422       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 13:29:09.732453       1 server_linux.go:170] "Using iptables Proxier"
	I0127 13:29:09.762321       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 13:29:09.762782       1 server.go:497] "Version info" version="v1.32.1"
	I0127 13:29:09.762795       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:29:09.768623       1 config.go:199] "Starting service config controller"
	I0127 13:29:09.768672       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 13:29:09.768707       1 config.go:105] "Starting endpoint slice config controller"
	I0127 13:29:09.768713       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 13:29:09.773494       1 config.go:329] "Starting node config controller"
	I0127 13:29:09.773543       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 13:29:09.885169       1 shared_informer.go:320] Caches are synced for service config
	I0127 13:29:09.885282       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 13:29:09.885732       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f43c952b31735542438f985d8b06e695c27ec42beef039c8c672f0933fa002d7] <==
	W0127 13:29:01.148506       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 13:29:01.151805       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:01.148636       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 13:29:01.152165       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:01.951023       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 13:29:01.951426       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:01.966736       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 13:29:01.966803       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 13:29:02.018100       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 13:29:02.018163       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:02.054946       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 13:29:02.056507       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:02.072914       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 13:29:02.073011       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:02.096423       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 13:29:02.096482       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:02.119574       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 13:29:02.119614       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:02.211207       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 13:29:02.211650       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:02.253593       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 13:29:02.254036       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:02.444112       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 13:29:02.444465       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 13:29:04.221294       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 13:49:56 embed-certs-766944 kubelet[3486]: E0127 13:49:56.649552    3486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-27dz9" podUID="9f604bd3-a953-4a12-b1bc-48e4e4c8bb4d"
	Jan 27 13:50:01 embed-certs-766944 kubelet[3486]: I0127 13:50:01.648865    3486 scope.go:117] "RemoveContainer" containerID="5fba71c415e0fb4b978e44dca8efea4e38972d328d7a879e98572233d741bbd5"
	Jan 27 13:50:01 embed-certs-766944 kubelet[3486]: E0127 13:50:01.649151    3486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-lvdlj_kubernetes-dashboard(e5af964a-f170-4c37-80c3-d3cbd0373fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-lvdlj" podUID="e5af964a-f170-4c37-80c3-d3cbd0373fe7"
	Jan 27 13:50:03 embed-certs-766944 kubelet[3486]: E0127 13:50:03.664435    3486 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 13:50:03 embed-certs-766944 kubelet[3486]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 13:50:03 embed-certs-766944 kubelet[3486]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 13:50:03 embed-certs-766944 kubelet[3486]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 13:50:03 embed-certs-766944 kubelet[3486]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 13:50:08 embed-certs-766944 kubelet[3486]: E0127 13:50:08.648654    3486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-27dz9" podUID="9f604bd3-a953-4a12-b1bc-48e4e4c8bb4d"
	Jan 27 13:50:15 embed-certs-766944 kubelet[3486]: I0127 13:50:15.648174    3486 scope.go:117] "RemoveContainer" containerID="5fba71c415e0fb4b978e44dca8efea4e38972d328d7a879e98572233d741bbd5"
	Jan 27 13:50:15 embed-certs-766944 kubelet[3486]: E0127 13:50:15.648714    3486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-lvdlj_kubernetes-dashboard(e5af964a-f170-4c37-80c3-d3cbd0373fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-lvdlj" podUID="e5af964a-f170-4c37-80c3-d3cbd0373fe7"
	Jan 27 13:50:23 embed-certs-766944 kubelet[3486]: E0127 13:50:23.679580    3486 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 13:50:23 embed-certs-766944 kubelet[3486]: E0127 13:50:23.679706    3486 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 13:50:23 embed-certs-766944 kubelet[3486]: E0127 13:50:23.680249    3486 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2z8h2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-27dz9_kube-system(9f604bd3-a953-4a12-b1bc-48e4e4c8bb4d): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 13:50:23 embed-certs-766944 kubelet[3486]: E0127 13:50:23.681943    3486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-27dz9" podUID="9f604bd3-a953-4a12-b1bc-48e4e4c8bb4d"
	Jan 27 13:50:30 embed-certs-766944 kubelet[3486]: I0127 13:50:30.647684    3486 scope.go:117] "RemoveContainer" containerID="5fba71c415e0fb4b978e44dca8efea4e38972d328d7a879e98572233d741bbd5"
	Jan 27 13:50:31 embed-certs-766944 kubelet[3486]: I0127 13:50:31.161605    3486 scope.go:117] "RemoveContainer" containerID="5fba71c415e0fb4b978e44dca8efea4e38972d328d7a879e98572233d741bbd5"
	Jan 27 13:50:31 embed-certs-766944 kubelet[3486]: I0127 13:50:31.161910    3486 scope.go:117] "RemoveContainer" containerID="5d496ed2929556e7114ffd177100511adbc104daab5af6b5cb7e2cf61bba780a"
	Jan 27 13:50:31 embed-certs-766944 kubelet[3486]: E0127 13:50:31.162161    3486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-lvdlj_kubernetes-dashboard(e5af964a-f170-4c37-80c3-d3cbd0373fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-lvdlj" podUID="e5af964a-f170-4c37-80c3-d3cbd0373fe7"
	Jan 27 13:50:34 embed-certs-766944 kubelet[3486]: I0127 13:50:34.846889    3486 scope.go:117] "RemoveContainer" containerID="5d496ed2929556e7114ffd177100511adbc104daab5af6b5cb7e2cf61bba780a"
	Jan 27 13:50:34 embed-certs-766944 kubelet[3486]: E0127 13:50:34.847266    3486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-lvdlj_kubernetes-dashboard(e5af964a-f170-4c37-80c3-d3cbd0373fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-lvdlj" podUID="e5af964a-f170-4c37-80c3-d3cbd0373fe7"
	Jan 27 13:50:37 embed-certs-766944 kubelet[3486]: E0127 13:50:37.649553    3486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-27dz9" podUID="9f604bd3-a953-4a12-b1bc-48e4e4c8bb4d"
	Jan 27 13:50:49 embed-certs-766944 kubelet[3486]: E0127 13:50:49.649086    3486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-27dz9" podUID="9f604bd3-a953-4a12-b1bc-48e4e4c8bb4d"
	Jan 27 13:50:50 embed-certs-766944 kubelet[3486]: I0127 13:50:50.648383    3486 scope.go:117] "RemoveContainer" containerID="5d496ed2929556e7114ffd177100511adbc104daab5af6b5cb7e2cf61bba780a"
	Jan 27 13:50:50 embed-certs-766944 kubelet[3486]: E0127 13:50:50.648567    3486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-lvdlj_kubernetes-dashboard(e5af964a-f170-4c37-80c3-d3cbd0373fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-lvdlj" podUID="e5af964a-f170-4c37-80c3-d3cbd0373fe7"
	
	
	==> kubernetes-dashboard [0f62aa9b27a1a1bce3942eb13d3576a3b6fd9c61e85cb9ed28dd97265719db12] <==
	2025/01/27 13:38:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:39:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:39:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:40:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:40:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:41:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:41:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:42:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:42:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:43:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:43:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:44:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:44:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:45:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:45:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:46:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:46:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:47:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:47:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:48:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:48:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:49:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:49:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:50:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:50:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [39d6d79d902e3ba937d9b1247eb761a09dec7def3e8e69a43b458c7443eb05ff] <==
	I0127 13:29:11.331328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 13:29:11.385575       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 13:29:11.385653       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 13:29:11.419377       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 13:29:11.422056       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8119342e-d08d-4e96-98e9-b0a504196ee7", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-766944_03efa8b7-2b48-4eaf-85ba-eee959c4f0ba became leader
	I0127 13:29:11.428697       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-766944_03efa8b7-2b48-4eaf-85ba-eee959c4f0ba!
	I0127 13:29:11.549099       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-766944_03efa8b7-2b48-4eaf-85ba-eee959c4f0ba!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-766944 -n embed-certs-766944
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-766944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-27dz9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-766944 describe pod metrics-server-f79f97bbb-27dz9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-766944 describe pod metrics-server-f79f97bbb-27dz9: exit status 1 (68.353462ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-27dz9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-766944 describe pod metrics-server-f79f97bbb-27dz9: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1611.78s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1642.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-325510 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 13:24:13.655289  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:14.937643  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:17.499478  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:22.620817  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:22.782437  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:29.048751  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:32.862630  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-325510 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (27m19.95872151s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-325510] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "default-k8s-diff-port-325510" primary control-plane node in "default-k8s-diff-port-325510" cluster
	* Restarting existing kvm2 VM for "default-k8s-diff-port-325510" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-325510 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:24:13.285953  529417 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:24:13.286094  529417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:24:13.286105  529417 out.go:358] Setting ErrFile to fd 2...
	I0127 13:24:13.286111  529417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:24:13.286372  529417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 13:24:13.287063  529417 out.go:352] Setting JSON to false
	I0127 13:24:13.288401  529417 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":36350,"bootTime":1737947903,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:24:13.288535  529417 start.go:139] virtualization: kvm guest
	I0127 13:24:13.290842  529417 out.go:177] * [default-k8s-diff-port-325510] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:24:13.292183  529417 notify.go:220] Checking for updates...
	I0127 13:24:13.292216  529417 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:24:13.293685  529417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:24:13.294943  529417 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:24:13.296240  529417 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	I0127 13:24:13.297433  529417 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:24:13.298618  529417 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:24:13.300175  529417 config.go:182] Loaded profile config "default-k8s-diff-port-325510": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:24:13.300569  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:24:13.300615  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:24:13.316180  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0127 13:24:13.316848  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:24:13.317473  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:24:13.317498  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:24:13.317980  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:24:13.318183  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:24:13.318442  529417 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:24:13.318724  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:24:13.318765  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:24:13.333715  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35815
	I0127 13:24:13.334173  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:24:13.334689  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:24:13.334710  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:24:13.335031  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:24:13.335253  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:24:13.375212  529417 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:24:13.376337  529417 start.go:297] selected driver: kvm2
	I0127 13:24:13.376352  529417 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-325510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8
s-diff-port-325510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequeste
d:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:24:13.376463  529417 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:24:13.377191  529417 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:24:13.377263  529417 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-466901/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:24:13.393011  529417 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:24:13.393422  529417 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:24:13.393463  529417 cni.go:84] Creating CNI manager for ""
	I0127 13:24:13.393511  529417 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:24:13.393551  529417 start.go:340] cluster config:
	{Name:default-k8s-diff-port-325510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-325510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/j
enkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:24:13.393655  529417 iso.go:125] acquiring lock: {Name:mkcc3db98c9d4661e75c49bd9b203d0232dff8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:24:13.395385  529417 out.go:177] * Starting "default-k8s-diff-port-325510" primary control-plane node in "default-k8s-diff-port-325510" cluster
	I0127 13:24:13.396659  529417 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:24:13.396713  529417 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 13:24:13.396728  529417 cache.go:56] Caching tarball of preloaded images
	I0127 13:24:13.396836  529417 preload.go:172] Found /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 13:24:13.396857  529417 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 13:24:13.396975  529417 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/default-k8s-diff-port-325510/config.json ...
	I0127 13:24:13.397186  529417 start.go:360] acquireMachinesLock for default-k8s-diff-port-325510: {Name:mke115b779db52cb0a5f0a05f83d5bad0a35c561 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:24:28.404653  529417 start.go:364] duration metric: took 15.007433619s to acquireMachinesLock for "default-k8s-diff-port-325510"
	I0127 13:24:28.404723  529417 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:24:28.404731  529417 fix.go:54] fixHost starting: 
	I0127 13:24:28.405127  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:24:28.405207  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:24:28.422325  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0127 13:24:28.422822  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:24:28.423415  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:24:28.423442  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:24:28.423792  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:24:28.423997  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:24:28.424163  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:24:28.425795  529417 fix.go:112] recreateIfNeeded on default-k8s-diff-port-325510: state=Stopped err=<nil>
	I0127 13:24:28.425824  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	W0127 13:24:28.425992  529417 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:24:28.428151  529417 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-325510" ...
	I0127 13:24:28.429490  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Start
	I0127 13:24:28.429688  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) starting domain...
	I0127 13:24:28.429710  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) ensuring networks are active...
	I0127 13:24:28.430547  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Ensuring network default is active
	I0127 13:24:28.430942  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Ensuring network mk-default-k8s-diff-port-325510 is active
	I0127 13:24:28.431446  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) getting domain XML...
	I0127 13:24:28.432400  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) creating domain...
	I0127 13:24:29.721688  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) waiting for IP...
	I0127 13:24:29.722621  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:29.723060  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:29.723153  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:29.723052  529516 retry.go:31] will retry after 299.442454ms: waiting for domain to come up
	I0127 13:24:30.024984  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:30.025573  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:30.025601  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:30.025522  529516 retry.go:31] will retry after 319.227707ms: waiting for domain to come up
	I0127 13:24:30.346226  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:30.346775  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:30.346809  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:30.346747  529516 retry.go:31] will retry after 444.247973ms: waiting for domain to come up
	I0127 13:24:30.793190  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:30.793686  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:30.793731  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:30.793674  529516 retry.go:31] will retry after 570.759092ms: waiting for domain to come up
	I0127 13:24:31.366234  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:31.366867  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:31.366894  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:31.366828  529516 retry.go:31] will retry after 666.181677ms: waiting for domain to come up
	I0127 13:24:32.034879  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:32.035631  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:32.035660  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:32.035596  529516 retry.go:31] will retry after 631.310028ms: waiting for domain to come up
	I0127 13:24:32.668542  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:32.669076  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:32.669104  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:32.669053  529516 retry.go:31] will retry after 854.359483ms: waiting for domain to come up
	I0127 13:24:33.525151  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:33.525700  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:33.525733  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:33.525653  529516 retry.go:31] will retry after 1.313338394s: waiting for domain to come up
	I0127 13:24:34.840679  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:34.841252  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:34.841279  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:34.841205  529516 retry.go:31] will retry after 1.753959637s: waiting for domain to come up
	I0127 13:24:36.597016  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:36.597575  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:36.597602  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:36.597558  529516 retry.go:31] will retry after 2.104624583s: waiting for domain to come up
	I0127 13:24:38.703447  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:38.704039  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:38.704069  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:38.703980  529516 retry.go:31] will retry after 2.827819454s: waiting for domain to come up
	I0127 13:24:41.533151  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:41.533654  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:41.533709  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:41.533612  529516 retry.go:31] will retry after 2.945860982s: waiting for domain to come up
	I0127 13:24:44.481317  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:44.481795  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | unable to find current IP address of domain default-k8s-diff-port-325510 in network mk-default-k8s-diff-port-325510
	I0127 13:24:44.481828  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | I0127 13:24:44.481744  529516 retry.go:31] will retry after 4.241582382s: waiting for domain to come up
	I0127 13:24:48.725564  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:48.726136  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) found domain IP: 192.168.61.7
	I0127 13:24:48.726181  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has current primary IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:48.726188  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) reserving static IP address...
	I0127 13:24:48.726590  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-325510", mac: "52:54:00:c4:f9:6c", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:48.726616  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) reserved static IP address 192.168.61.7 for domain default-k8s-diff-port-325510
	I0127 13:24:48.726647  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | skip adding static IP to network mk-default-k8s-diff-port-325510 - found existing host DHCP lease matching {name: "default-k8s-diff-port-325510", mac: "52:54:00:c4:f9:6c", ip: "192.168.61.7"}
	I0127 13:24:48.726661  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) waiting for SSH...
	I0127 13:24:48.726672  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Getting to WaitForSSH function...
	I0127 13:24:48.729240  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:48.729624  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:48.729681  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:48.729809  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Using SSH client type: external
	I0127 13:24:48.729838  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa (-rw-------)
	I0127 13:24:48.729909  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:24:48.729936  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | About to run SSH command:
	I0127 13:24:48.729954  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | exit 0
	I0127 13:24:48.855737  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | SSH cmd err, output: <nil>: 
	I0127 13:24:48.856124  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetConfigRaw
	I0127 13:24:48.856748  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetIP
	I0127 13:24:48.859205  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:48.859587  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:48.859621  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:48.859852  529417 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/default-k8s-diff-port-325510/config.json ...
	I0127 13:24:48.860093  529417 machine.go:93] provisionDockerMachine start ...
	I0127 13:24:48.860131  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:24:48.860340  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:24:48.862775  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:48.863110  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:48.863141  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:48.863246  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:24:48.863452  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:24:48.863620  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:24:48.863829  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:24:48.864000  529417 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:48.864204  529417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0127 13:24:48.864215  529417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:24:48.971869  529417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:24:48.971900  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetMachineName
	I0127 13:24:48.972206  529417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-325510"
	I0127 13:24:48.972242  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetMachineName
	I0127 13:24:48.972470  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:24:48.975584  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:48.975975  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:48.976019  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:48.976207  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:24:48.976451  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:24:48.976624  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:24:48.976742  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:24:48.976915  529417 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:48.977177  529417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0127 13:24:48.977197  529417 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-325510 && echo "default-k8s-diff-port-325510" | sudo tee /etc/hostname
	I0127 13:24:49.097798  529417 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-325510
	
	I0127 13:24:49.097850  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:24:49.101024  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:49.101448  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:49.101481  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:49.101647  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:24:49.101845  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:24:49.101974  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:24:49.102134  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:24:49.102281  529417 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:49.102587  529417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0127 13:24:49.102618  529417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-325510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-325510/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-325510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:24:49.220700  529417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:24:49.220759  529417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-466901/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-466901/.minikube}
	I0127 13:24:49.220788  529417 buildroot.go:174] setting up certificates
	I0127 13:24:49.220804  529417 provision.go:84] configureAuth start
	I0127 13:24:49.220823  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetMachineName
	I0127 13:24:49.221170  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetIP
	I0127 13:24:49.224273  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:49.224681  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:49.224717  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:49.224928  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:24:49.227289  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:49.227587  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:49.227626  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:49.227787  529417 provision.go:143] copyHostCerts
	I0127 13:24:49.227852  529417 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem, removing ...
	I0127 13:24:49.227864  529417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem
	I0127 13:24:49.227922  529417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem (1123 bytes)
	I0127 13:24:49.228016  529417 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem, removing ...
	I0127 13:24:49.228034  529417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem
	I0127 13:24:49.228057  529417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem (1675 bytes)
	I0127 13:24:49.228115  529417 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem, removing ...
	I0127 13:24:49.228122  529417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem
	I0127 13:24:49.228139  529417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem (1082 bytes)
	I0127 13:24:49.228185  529417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-325510 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-325510 localhost minikube]
	I0127 13:24:49.597262  529417 provision.go:177] copyRemoteCerts
	I0127 13:24:49.597335  529417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:24:49.597365  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:24:49.600086  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:49.600430  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:49.600466  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:49.600636  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:24:49.600897  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:24:49.601062  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:24:49.601192  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:24:49.689127  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 13:24:49.718741  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0127 13:24:49.745565  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 13:24:49.772604  529417 provision.go:87] duration metric: took 551.776703ms to configureAuth
	I0127 13:24:49.772646  529417 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:24:49.772900  529417 config.go:182] Loaded profile config "default-k8s-diff-port-325510": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:24:49.772928  529417 machine.go:96] duration metric: took 912.814676ms to provisionDockerMachine
	I0127 13:24:49.772942  529417 start.go:293] postStartSetup for "default-k8s-diff-port-325510" (driver="kvm2")
	I0127 13:24:49.772957  529417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:24:49.772992  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:24:49.773364  529417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:24:49.773405  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:24:49.776221  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:49.776646  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:49.776689  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:49.776928  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:24:49.777197  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:24:49.777392  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:24:49.777573  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:24:49.862730  529417 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:24:49.867684  529417 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:24:49.867720  529417 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/addons for local assets ...
	I0127 13:24:49.867805  529417 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/files for local assets ...
	I0127 13:24:49.867916  529417 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem -> 4742752.pem in /etc/ssl/certs
	I0127 13:24:49.868021  529417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:24:49.878556  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /etc/ssl/certs/4742752.pem (1708 bytes)
	I0127 13:24:49.904971  529417 start.go:296] duration metric: took 132.009663ms for postStartSetup
	I0127 13:24:49.905024  529417 fix.go:56] duration metric: took 21.500293789s for fixHost
	I0127 13:24:49.905052  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:24:49.907903  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:49.908296  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:49.908335  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:49.908541  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:24:49.908779  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:24:49.908954  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:24:49.909136  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:24:49.909338  529417 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:49.909570  529417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0127 13:24:49.909588  529417 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:24:50.020394  529417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984290.001658139
	
	I0127 13:24:50.020424  529417 fix.go:216] guest clock: 1737984290.001658139
	I0127 13:24:50.020435  529417 fix.go:229] Guest: 2025-01-27 13:24:50.001658139 +0000 UTC Remote: 2025-01-27 13:24:49.905029759 +0000 UTC m=+36.666841763 (delta=96.62838ms)
	I0127 13:24:50.020475  529417 fix.go:200] guest clock delta is within tolerance: 96.62838ms
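
	The guest clock delta logged above is just the difference between the timestamp returned by "date +%s.%N" on the VM and the host-side reference time. A minimal Go sketch (not minikube's actual fix.go code) reproducing the 96.62838ms figure from the two values in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest clock, as reported by `date +%s.%N` on the VM: 1737984290.001658139
        guest := time.Unix(1737984290, 1658139)
        // Host-side reference time from the log: 2025-01-27 13:24:49.905029759 +0000 UTC
        host := time.Date(2025, time.January, 27, 13, 24, 49, 905029759, time.UTC)
        fmt.Println(guest.Sub(host)) // 96.62838ms, within the tolerance checked above
    }
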
	I0127 13:24:50.020484  529417 start.go:83] releasing machines lock for "default-k8s-diff-port-325510", held for 21.615784521s
	I0127 13:24:50.020522  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:24:50.020812  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetIP
	I0127 13:24:50.023795  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:50.024122  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:50.024153  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:50.024377  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:24:50.025028  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:24:50.025221  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:24:50.025320  529417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:24:50.025387  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:24:50.025448  529417 ssh_runner.go:195] Run: cat /version.json
	I0127 13:24:50.025485  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:24:50.028316  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:50.028553  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:50.028703  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:50.028741  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:50.028902  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:24:50.028987  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:50.029020  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:50.029103  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:24:50.029256  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:24:50.029277  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:24:50.029463  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:24:50.029464  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:24:50.029630  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:24:50.029754  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:24:50.117004  529417 ssh_runner.go:195] Run: systemctl --version
	I0127 13:24:50.137850  529417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:24:50.144598  529417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:24:50.144685  529417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:24:50.161332  529417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 13:24:50.161367  529417 start.go:495] detecting cgroup driver to use...
	I0127 13:24:50.161443  529417 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 13:24:50.196559  529417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 13:24:50.213633  529417 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:24:50.213699  529417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:24:50.231542  529417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:24:50.247520  529417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:24:50.372935  529417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:24:50.536833  529417 docker.go:233] disabling docker service ...
	I0127 13:24:50.536908  529417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:24:50.554952  529417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:24:50.569134  529417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:24:50.739902  529417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:24:50.892857  529417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:24:50.912330  529417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:24:50.938793  529417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 13:24:50.950749  529417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 13:24:50.963564  529417 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 13:24:50.963635  529417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 13:24:50.975859  529417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:24:50.990589  529417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 13:24:51.005613  529417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:24:51.018217  529417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:24:51.030203  529417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 13:24:51.041119  529417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 13:24:51.051910  529417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 13:24:51.063174  529417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:24:51.073479  529417 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:24:51.073557  529417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:24:51.088700  529417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:24:51.099786  529417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:24:51.240531  529417 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 13:24:51.274670  529417 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 13:24:51.274763  529417 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 13:24:51.279810  529417 retry.go:31] will retry after 1.446242597s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
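
	The socket wait above retries "stat /run/containerd/containerd.sock" until the file exists or the 60-second budget runs out. A rough Go sketch of that polling loop, assuming a fixed one-second sleep in place of minikube's jittered retry backoff:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(time.Second) // minikube's retry.go uses a jittered backoff instead
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
    }
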
	I0127 13:24:52.727396  529417 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 13:24:52.734392  529417 start.go:563] Will wait 60s for crictl version
	I0127 13:24:52.734470  529417 ssh_runner.go:195] Run: which crictl
	I0127 13:24:52.738933  529417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:24:52.794147  529417 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 13:24:52.794239  529417 ssh_runner.go:195] Run: containerd --version
	I0127 13:24:52.821917  529417 ssh_runner.go:195] Run: containerd --version
	I0127 13:24:52.853353  529417 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 13:24:52.854839  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetIP
	I0127 13:24:52.858283  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:52.858673  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:24:52.858715  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:24:52.859066  529417 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 13:24:52.864156  529417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:24:52.883140  529417 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-325510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-325510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:24:52.883316  529417 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:24:52.883404  529417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:24:52.924106  529417 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:24:52.924143  529417 containerd.go:534] Images already preloaded, skipping extraction
	I0127 13:24:52.924204  529417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:24:52.962163  529417 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:24:52.962190  529417 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:24:52.962198  529417 kubeadm.go:934] updating node { 192.168.61.7 8444 v1.32.1 containerd true true} ...
	I0127 13:24:52.962336  529417 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-325510 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-325510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:24:52.962410  529417 ssh_runner.go:195] Run: sudo crictl info
	I0127 13:24:53.001424  529417 cni.go:84] Creating CNI manager for ""
	I0127 13:24:53.001464  529417 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:24:53.001478  529417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:24:53.001500  529417 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-325510 NodeName:default-k8s-diff-port-325510 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:24:53.001661  529417 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-325510"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.7"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:24:53.001796  529417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:24:53.014013  529417 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:24:53.014113  529417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:24:53.025778  529417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0127 13:24:53.045964  529417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:24:53.068690  529417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2318 bytes)
	I0127 13:24:53.092279  529417 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0127 13:24:53.096896  529417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:24:53.115441  529417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:24:53.273946  529417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:24:53.306073  529417 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/default-k8s-diff-port-325510 for IP: 192.168.61.7
	I0127 13:24:53.306108  529417 certs.go:194] generating shared ca certs ...
	I0127 13:24:53.306132  529417 certs.go:226] acquiring lock for ca certs: {Name:mk60f2aac78eb363c5e06a00675357d94c0df88d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:24:53.306336  529417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key
	I0127 13:24:53.306404  529417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key
	I0127 13:24:53.306419  529417 certs.go:256] generating profile certs ...
	I0127 13:24:53.306554  529417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/default-k8s-diff-port-325510/client.key
	I0127 13:24:53.306670  529417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/default-k8s-diff-port-325510/apiserver.key.199a12d1
	I0127 13:24:53.306736  529417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/default-k8s-diff-port-325510/proxy-client.key
	I0127 13:24:53.306901  529417 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem (1338 bytes)
	W0127 13:24:53.306944  529417 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275_empty.pem, impossibly tiny 0 bytes
	I0127 13:24:53.306959  529417 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:24:53.306999  529417 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem (1082 bytes)
	I0127 13:24:53.307035  529417 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:24:53.307063  529417 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem (1675 bytes)
	I0127 13:24:53.307122  529417 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem (1708 bytes)
	I0127 13:24:53.308067  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:24:53.346469  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:24:53.385552  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:24:53.423146  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:24:53.458850  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/default-k8s-diff-port-325510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0127 13:24:53.492493  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/default-k8s-diff-port-325510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 13:24:53.531635  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/default-k8s-diff-port-325510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:24:53.575184  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/default-k8s-diff-port-325510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 13:24:53.611295  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:24:53.647118  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem --> /usr/share/ca-certificates/474275.pem (1338 bytes)
	I0127 13:24:53.675997  529417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /usr/share/ca-certificates/4742752.pem (1708 bytes)
	I0127 13:24:53.708164  529417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:24:53.727285  529417 ssh_runner.go:195] Run: openssl version
	I0127 13:24:53.733905  529417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:24:53.747943  529417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:24:53.753785  529417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:10 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:24:53.753870  529417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:24:53.760292  529417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:24:53.773408  529417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/474275.pem && ln -fs /usr/share/ca-certificates/474275.pem /etc/ssl/certs/474275.pem"
	I0127 13:24:53.786208  529417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/474275.pem
	I0127 13:24:53.791299  529417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:18 /usr/share/ca-certificates/474275.pem
	I0127 13:24:53.791381  529417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/474275.pem
	I0127 13:24:53.797805  529417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/474275.pem /etc/ssl/certs/51391683.0"
	I0127 13:24:53.810377  529417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4742752.pem && ln -fs /usr/share/ca-certificates/4742752.pem /etc/ssl/certs/4742752.pem"
	I0127 13:24:53.823412  529417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4742752.pem
	I0127 13:24:53.828814  529417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:18 /usr/share/ca-certificates/4742752.pem
	I0127 13:24:53.828887  529417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4742752.pem
	I0127 13:24:53.835471  529417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4742752.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:24:53.849265  529417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:24:53.854537  529417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:24:53.861164  529417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:24:53.868147  529417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:24:53.875042  529417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:24:53.881674  529417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:24:53.888303  529417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
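
	Each "openssl x509 -noout -checkend 86400" call above succeeds only if the certificate is still valid 86400 seconds (24 hours) from now. A rough Go equivalent of that check; the expiresWithin helper is hypothetical, not minikube's code:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
    }
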
	I0127 13:24:53.894827  529417 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-325510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-325510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:24:53.894954  529417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 13:24:53.895016  529417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:24:53.936503  529417 cri.go:89] found id: "949c426f031848a20c4e6fd8e05c7f736119a31d37c44d2a6df0c1ef9f251356"
	I0127 13:24:53.936532  529417 cri.go:89] found id: "1dc53bf9e12e7cae26e763b6a9beca1b58449566ddf1c4c2c5ada0a761d2c796"
	I0127 13:24:53.936539  529417 cri.go:89] found id: "1cef4f6247ffd103c94d0026ec9372adf09641aa8c3bbbb7010269af60cf841c"
	I0127 13:24:53.936549  529417 cri.go:89] found id: "b55c90a01ee33a515a953f034338c5459fb13ed26e5d9ea3cf1b3d41d42a4fc5"
	I0127 13:24:53.936553  529417 cri.go:89] found id: "237d9c7947a7132fdde51179d45bd6de416f5e4f071e9f242cacd99b8bea5ea2"
	I0127 13:24:53.936557  529417 cri.go:89] found id: "fe9640e793b4a3b90e786b236c9f5a25a51c267c34916a08b849f7deb67d0df0"
	I0127 13:24:53.936560  529417 cri.go:89] found id: "75dd8b45a4dd39152220f16ad338cb8d88697a679e315508330e810a983e5803"
	I0127 13:24:53.936564  529417 cri.go:89] found id: ""
	I0127 13:24:53.936626  529417 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 13:24:53.952707  529417 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T13:24:53Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 13:24:53.952790  529417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:24:53.964055  529417 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:24:53.964081  529417 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:24:53.964144  529417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:24:53.974789  529417 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:24:53.975603  529417 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-325510" does not appear in /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:24:53.975933  529417 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-466901/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-325510" cluster setting kubeconfig missing "default-k8s-diff-port-325510" context setting]
	I0127 13:24:53.976563  529417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:24:53.977892  529417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:24:53.988640  529417 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.7
	I0127 13:24:53.988678  529417 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:24:53.988695  529417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 13:24:53.988754  529417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:24:54.026418  529417 cri.go:89] found id: "949c426f031848a20c4e6fd8e05c7f736119a31d37c44d2a6df0c1ef9f251356"
	I0127 13:24:54.026453  529417 cri.go:89] found id: "1dc53bf9e12e7cae26e763b6a9beca1b58449566ddf1c4c2c5ada0a761d2c796"
	I0127 13:24:54.026459  529417 cri.go:89] found id: "1cef4f6247ffd103c94d0026ec9372adf09641aa8c3bbbb7010269af60cf841c"
	I0127 13:24:54.026464  529417 cri.go:89] found id: "b55c90a01ee33a515a953f034338c5459fb13ed26e5d9ea3cf1b3d41d42a4fc5"
	I0127 13:24:54.026469  529417 cri.go:89] found id: "237d9c7947a7132fdde51179d45bd6de416f5e4f071e9f242cacd99b8bea5ea2"
	I0127 13:24:54.026472  529417 cri.go:89] found id: "fe9640e793b4a3b90e786b236c9f5a25a51c267c34916a08b849f7deb67d0df0"
	I0127 13:24:54.026475  529417 cri.go:89] found id: "75dd8b45a4dd39152220f16ad338cb8d88697a679e315508330e810a983e5803"
	I0127 13:24:54.026477  529417 cri.go:89] found id: ""
	I0127 13:24:54.026482  529417 cri.go:252] Stopping containers: [949c426f031848a20c4e6fd8e05c7f736119a31d37c44d2a6df0c1ef9f251356 1dc53bf9e12e7cae26e763b6a9beca1b58449566ddf1c4c2c5ada0a761d2c796 1cef4f6247ffd103c94d0026ec9372adf09641aa8c3bbbb7010269af60cf841c b55c90a01ee33a515a953f034338c5459fb13ed26e5d9ea3cf1b3d41d42a4fc5 237d9c7947a7132fdde51179d45bd6de416f5e4f071e9f242cacd99b8bea5ea2 fe9640e793b4a3b90e786b236c9f5a25a51c267c34916a08b849f7deb67d0df0 75dd8b45a4dd39152220f16ad338cb8d88697a679e315508330e810a983e5803]
	I0127 13:24:54.026541  529417 ssh_runner.go:195] Run: which crictl
	I0127 13:24:54.031123  529417 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 949c426f031848a20c4e6fd8e05c7f736119a31d37c44d2a6df0c1ef9f251356 1dc53bf9e12e7cae26e763b6a9beca1b58449566ddf1c4c2c5ada0a761d2c796 1cef4f6247ffd103c94d0026ec9372adf09641aa8c3bbbb7010269af60cf841c b55c90a01ee33a515a953f034338c5459fb13ed26e5d9ea3cf1b3d41d42a4fc5 237d9c7947a7132fdde51179d45bd6de416f5e4f071e9f242cacd99b8bea5ea2 fe9640e793b4a3b90e786b236c9f5a25a51c267c34916a08b849f7deb67d0df0 75dd8b45a4dd39152220f16ad338cb8d88697a679e315508330e810a983e5803
	I0127 13:24:54.069855  529417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:24:54.087489  529417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:24:54.098083  529417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:24:54.098104  529417 kubeadm.go:157] found existing configuration files:
	
	I0127 13:24:54.098164  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 13:24:54.108731  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:24:54.108808  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:24:54.119477  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 13:24:54.129267  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:24:54.129332  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:24:54.139297  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 13:24:54.148879  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:24:54.148961  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:24:54.158947  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 13:24:54.168549  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:24:54.168613  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:24:54.179350  529417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:24:54.192573  529417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:24:54.326291  529417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:24:55.187158  529417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:24:55.404194  529417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:24:55.486010  529417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:24:55.588760  529417 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:24:55.588867  529417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:24:56.089349  529417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:24:56.589301  529417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:24:56.616985  529417 api_server.go:72] duration metric: took 1.028225016s to wait for apiserver process to appear ...
	I0127 13:24:56.617024  529417 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:24:56.617050  529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0127 13:24:56.617573  529417 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0127 13:24:57.117226  529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0127 13:24:59.535495  529417 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:24:59.535544  529417 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:24:59.535564  529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0127 13:24:59.625890  529417 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:24:59.625929  529417 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:24:59.625959  529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0127 13:24:59.652425  529417 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:24:59.652520  529417 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:25:00.117151  529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0127 13:25:00.127786  529417 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:25:00.127832  529417 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:25:00.617505  529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0127 13:25:00.624004  529417 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:25:00.624035  529417 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:25:01.117266  529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0127 13:25:01.126118  529417 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:25:01.126155  529417 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:25:01.617786  529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0127 13:25:01.627371  529417 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0127 13:25:01.635474  529417 api_server.go:141] control plane version: v1.32.1
	I0127 13:25:01.635516  529417 api_server.go:131] duration metric: took 5.018482784s to wait for apiserver health ...
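	Note: the healthz wait above polls https://192.168.61.7:8444/healthz roughly every 500ms; "connection refused", 403 from the anonymous probe, and 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still running are all treated as "not healthy yet", and the wait ends once the endpoint returns 200 "ok". A minimal Go sketch of such a poll loop follows; the URL is taken from the log, the 4-minute budget and the insecure anonymous HTTPS probe are assumptions, not minikube's actual client setup.

    // healthz_wait.go - a sketch of polling an apiserver /healthz endpoint until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.61.7:8444/healthz"
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The probe is anonymous and the apiserver cert is not trusted by the host,
    		// so certificate verification is skipped for this health check only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("not reachable yet:", err) // e.g. connection refused
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body)) // "ok"
    				return
    			}
    			// 403 (anonymous user) and 500 (post-start hooks still running) both mean "retry".
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }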
	I0127 13:25:01.635529  529417 cni.go:84] Creating CNI manager for ""
	I0127 13:25:01.635539  529417 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:25:01.637473  529417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:25:01.638774  529417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:25:01.658689  529417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
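	Note: the "scp memory" line above writes a 496-byte bridge conflist generated in memory to /etc/cni/net.d/1-k8s.conflist. The exact payload is not shown in the log, so the snippet below is only a representative bridge + host-local conflist of the kind such a file typically contains, wrapped in a short Go writer; the subnet, plugin list and file contents are assumptions.

    // write_cni_conflist.go - an illustrative sketch of the conflist-write step above.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil { // mirrors "sudo mkdir -p /etc/cni/net.d"
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }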
	I0127 13:25:01.696836  529417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:25:01.716214  529417 system_pods.go:59] 8 kube-system pods found
	I0127 13:25:01.716273  529417 system_pods.go:61] "coredns-668d6bf9bc-r5wcc" [54f917ce-81fa-400c-8046-220c3a6657ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:25:01.716288  529417 system_pods.go:61] "etcd-default-k8s-diff-port-325510" [4fc959de-d3bd-4a79-88e2-31ac0dc91765] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:25:01.716300  529417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-325510" [32fe19d4-bc6c-488a-a29f-3df526e18382] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:25:01.716310  529417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-325510" [73f8ec09-665d-40d0-a85d-792d1f3446dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:25:01.716331  529417 system_pods.go:61] "kube-proxy-r682b" [ed01f7d5-6ab9-4be5-a1df-5cb51457b006] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 13:25:01.716352  529417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-325510" [aa91ea85-0b69-4a07-97ee-5e5f04e97810] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:25:01.716365  529417 system_pods.go:61] "metrics-server-f79f97bbb-l56jp" [409b1195-500e-4b4d-85b9-2e9ef984b06f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:25:01.716373  529417 system_pods.go:61] "storage-provisioner" [62825ac9-c0b9-4e11-9a2c-3f171a2f869f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 13:25:01.716381  529417 system_pods.go:74] duration metric: took 19.517035ms to wait for pod list to return data ...
	I0127 13:25:01.716395  529417 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:25:01.731288  529417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:25:01.731328  529417 node_conditions.go:123] node cpu capacity is 2
	I0127 13:25:01.731346  529417 node_conditions.go:105] duration metric: took 14.945078ms to run NodePressure ...
	I0127 13:25:01.731384  529417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:25:02.084442  529417 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 13:25:02.089368  529417 kubeadm.go:739] kubelet initialised
	I0127 13:25:02.089395  529417 kubeadm.go:740] duration metric: took 4.920864ms waiting for restarted kubelet to initialise ...
	I0127 13:25:02.089404  529417 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:25:02.095806  529417 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-r5wcc" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:04.105210  529417 pod_ready.go:103] pod "coredns-668d6bf9bc-r5wcc" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:06.603300  529417 pod_ready.go:103] pod "coredns-668d6bf9bc-r5wcc" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:08.613823  529417 pod_ready.go:103] pod "coredns-668d6bf9bc-r5wcc" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:11.105731  529417 pod_ready.go:103] pod "coredns-668d6bf9bc-r5wcc" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:13.107535  529417 pod_ready.go:93] pod "coredns-668d6bf9bc-r5wcc" in "kube-system" namespace has status "Ready":"True"
	I0127 13:25:13.107565  529417 pod_ready.go:82] duration metric: took 11.011728081s for pod "coredns-668d6bf9bc-r5wcc" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:13.107577  529417 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:14.116132  529417 pod_ready.go:93] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:25:14.116157  529417 pod_ready.go:82] duration metric: took 1.008572248s for pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:14.116172  529417 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:14.122703  529417 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:25:14.122728  529417 pod_ready.go:82] duration metric: took 6.547593ms for pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:14.122738  529417 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:14.129435  529417 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:25:14.129464  529417 pod_ready.go:82] duration metric: took 6.717862ms for pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:14.129480  529417 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-r682b" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:14.134042  529417 pod_ready.go:93] pod "kube-proxy-r682b" in "kube-system" namespace has status "Ready":"True"
	I0127 13:25:14.134061  529417 pod_ready.go:82] duration metric: took 4.573719ms for pod "kube-proxy-r682b" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:14.134070  529417 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:14.300062  529417 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:25:14.300094  529417 pod_ready.go:82] duration metric: took 166.017413ms for pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:14.300105  529417 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace to be "Ready" ...
	I0127 13:25:16.308623  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:18.856632  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:21.306407  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:23.810687  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:26.307742  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:28.309036  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:30.808349  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:32.809483  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:35.307188  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:37.308927  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:39.808423  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:42.307486  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:44.312332  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:46.807412  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:48.809451  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:51.305733  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:53.308323  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:55.806654  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:57.807168  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:25:59.812137  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:02.306910  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:04.307325  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:06.307463  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:08.808018  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:11.308059  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:13.807328  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:16.306749  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:18.307940  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:20.308258  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:22.308644  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:24.806830  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:27.306602  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:29.307016  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:31.806363  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:33.807548  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:35.808164  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:38.306742  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:40.307122  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:42.807515  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:45.307193  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:47.307278  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:49.806932  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:52.311634  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:54.806963  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:56.807253  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:26:58.807315  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:01.307358  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:03.307514  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:05.307555  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:07.307758  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:09.806666  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:11.807944  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:13.809110  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:16.311433  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:18.808503  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:21.306884  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:23.307356  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:25.806839  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:27.807367  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:30.306346  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:32.307183  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:34.307937  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:36.807869  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:38.808145  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:41.308367  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:43.806640  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:45.807493  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:47.807788  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:49.809098  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:52.308590  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:54.808424  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:56.808528  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:27:59.307491  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:01.809727  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:04.309828  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:06.808456  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:09.306294  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:11.306816  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:13.806713  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:16.307799  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:18.308159  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:20.314185  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:22.808208  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:24.809601  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:27.306517  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:29.308490  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:31.808694  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:34.311021  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:36.809246  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:39.308477  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:41.811363  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:44.308127  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:46.308440  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:48.808277  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:51.307875  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:53.806392  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:55.806518  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:57.808012  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:59.808090  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:01.808480  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:03.808549  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:05.809379  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:08.309241  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:10.806393  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:12.808038  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:14.300254  529417 pod_ready.go:82] duration metric: took 4m0.000130065s for pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace to be "Ready" ...
	E0127 13:29:14.300291  529417 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 13:29:14.300324  529417 pod_ready.go:39] duration metric: took 4m12.210910321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
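	Note: the long block above is the "extra wait": each system-critical pod is polled until its Ready condition is True, with a 4m0s budget per pod. coredns, etcd, the control-plane components and kube-proxy all become Ready, but metrics-server-f79f97bbb-l56jp never does, so the budget expires and minikube gives up on restarting the existing control plane and falls back to kubeadm reset below. A minimal client-go sketch of the per-pod readiness check follows; the kubeconfig path is hypothetical, and the pod name, namespace and 4m0s deadline are taken from the log.

    // pod_ready_wait.go - a sketch of waiting for a pod's Ready condition, as pod_ready.go does above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-f79f97bbb-l56jp", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("gave up waiting:", ctx.Err()) // the outcome seen in the log above
    			return
    		case <-time.After(2 * time.Second):
    		}
    	}
    }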
	I0127 13:29:14.300355  529417 kubeadm.go:597] duration metric: took 4m20.336267253s to restartPrimaryControlPlane
	W0127 13:29:14.300420  529417 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:29:14.300449  529417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 13:29:16.335301  529417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.034816955s)
	I0127 13:29:16.335395  529417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:29:16.352998  529417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:29:16.365092  529417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:29:16.378733  529417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:29:16.378758  529417 kubeadm.go:157] found existing configuration files:
	
	I0127 13:29:16.378804  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 13:29:16.395924  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:29:16.396005  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:29:16.408496  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 13:29:16.418917  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:29:16.418986  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:29:16.429065  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 13:29:16.439234  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:29:16.439333  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:29:16.449865  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 13:29:16.460738  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:29:16.460831  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:29:16.472411  529417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:29:16.642625  529417 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:29:25.581414  529417 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:29:25.581498  529417 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:29:25.581603  529417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:29:25.581744  529417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:29:25.581857  529417 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:29:25.581911  529417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:29:25.583668  529417 out.go:235]   - Generating certificates and keys ...
	I0127 13:29:25.583784  529417 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:29:25.583864  529417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:29:25.583999  529417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:29:25.584094  529417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:29:25.584212  529417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:29:25.584290  529417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:29:25.584368  529417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:29:25.584490  529417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:29:25.584607  529417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:29:25.584736  529417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:29:25.584797  529417 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:29:25.584859  529417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:29:25.584911  529417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:29:25.584981  529417 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:29:25.585070  529417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:29:25.585182  529417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:29:25.585291  529417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:29:25.585425  529417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:29:25.585505  529417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:29:25.587922  529417 out.go:235]   - Booting up control plane ...
	I0127 13:29:25.588008  529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:29:25.588109  529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:29:25.588212  529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:29:25.588306  529417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:29:25.588407  529417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:29:25.588476  529417 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:29:25.588653  529417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:29:25.588744  529417 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:29:25.588806  529417 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.424535ms
	I0127 13:29:25.588894  529417 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:29:25.588947  529417 kubeadm.go:310] [api-check] The API server is healthy after 6.003546574s
	I0127 13:29:25.589042  529417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:29:25.589188  529417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:29:25.589243  529417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:29:25.589423  529417 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-325510 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:29:25.589477  529417 kubeadm.go:310] [bootstrap-token] Using token: pmveah.4ebz9u5xjcadsa8l
	I0127 13:29:25.590661  529417 out.go:235]   - Configuring RBAC rules ...
	I0127 13:29:25.590772  529417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:29:25.590884  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:29:25.591076  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:29:25.591309  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:29:25.591477  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:29:25.591601  529417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:29:25.591734  529417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:29:25.591810  529417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:29:25.591869  529417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:29:25.591879  529417 kubeadm.go:310] 
	I0127 13:29:25.591954  529417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:29:25.591974  529417 kubeadm.go:310] 
	I0127 13:29:25.592097  529417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:29:25.592115  529417 kubeadm.go:310] 
	I0127 13:29:25.592151  529417 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:29:25.592237  529417 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:29:25.592327  529417 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:29:25.592337  529417 kubeadm.go:310] 
	I0127 13:29:25.592390  529417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:29:25.592397  529417 kubeadm.go:310] 
	I0127 13:29:25.592435  529417 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:29:25.592439  529417 kubeadm.go:310] 
	I0127 13:29:25.592512  529417 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:29:25.592614  529417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:29:25.592674  529417 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:29:25.592682  529417 kubeadm.go:310] 
	I0127 13:29:25.592801  529417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:29:25.592928  529417 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:29:25.592941  529417 kubeadm.go:310] 
	I0127 13:29:25.593032  529417 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token pmveah.4ebz9u5xjcadsa8l \
	I0127 13:29:25.593158  529417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 \
	I0127 13:29:25.593193  529417 kubeadm.go:310] 	--control-plane 
	I0127 13:29:25.593206  529417 kubeadm.go:310] 
	I0127 13:29:25.593328  529417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:29:25.593347  529417 kubeadm.go:310] 
	I0127 13:29:25.593453  529417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token pmveah.4ebz9u5xjcadsa8l \
	I0127 13:29:25.593643  529417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 
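	Note: the kubeadm join commands printed above carry a --discovery-token-ca-cert-hash; that value is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate and can be recomputed independently of kubeadm. A short Go sketch follows, assuming the CA lives at /var/lib/minikube/certs/ca.crt (the certificateDir reported earlier in this run; the file name itself is an assumption).

    // ca_cert_hash.go - a sketch recomputing the discovery-token-ca-cert-hash from the cluster CA.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in CA file")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// The hash covers the DER-encoded SubjectPublicKeyInfo of the CA certificate.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }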
	I0127 13:29:25.593663  529417 cni.go:84] Creating CNI manager for ""
	I0127 13:29:25.593674  529417 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:25.595331  529417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:29:25.596457  529417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:29:25.611060  529417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:29:25.631563  529417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:29:25.631668  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:25.631709  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-325510 minikube.k8s.io/updated_at=2025_01_27T13_29_25_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=default-k8s-diff-port-325510 minikube.k8s.io/primary=true
	I0127 13:29:25.654141  529417 ops.go:34] apiserver oom_adj: -16
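	Note: the oom_adj check above runs `cat /proc/$(pgrep kube-apiserver)/oom_adj` and logs the value (-16 here, i.e. the kernel is strongly discouraged from OOM-killing the apiserver). A small Go sketch of the same lookup, assuming it runs directly on the node; error handling is minimal.

    // apiserver_oom_adj.go - find the kube-apiserver PID with pgrep and read its oom_adj.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.Fields(string(out))[0] // first matching PID
    	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(val))) // "-16" in the run above
    }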
	I0127 13:29:25.885770  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:26.386140  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:26.885887  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:27.386520  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:27.886746  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:28.386093  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:28.523381  529417 kubeadm.go:1113] duration metric: took 2.89179334s to wait for elevateKubeSystemPrivileges
	I0127 13:29:28.523431  529417 kubeadm.go:394] duration metric: took 4m34.628614328s to StartCluster
	I0127 13:29:28.523462  529417 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:28.523566  529417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:28.526181  529417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:28.526636  529417 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:29:28.526773  529417 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:29:28.526897  529417 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-325510"
	I0127 13:29:28.526920  529417 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-325510"
	I0127 13:29:28.526920  529417 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-325510"
	W0127 13:29:28.526930  529417 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:29:28.526943  529417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-325510"
	I0127 13:29:28.526965  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.527036  529417 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-325510"
	I0127 13:29:28.527054  529417 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-325510"
	W0127 13:29:28.527061  529417 addons.go:247] addon dashboard should already be in state true
	I0127 13:29:28.527086  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.527083  529417 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-325510"
	I0127 13:29:28.527117  529417 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-325510"
	W0127 13:29:28.527128  529417 addons.go:247] addon metrics-server should already be in state true
	I0127 13:29:28.527164  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.527436  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.527441  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.526898  529417 config.go:182] Loaded profile config "default-k8s-diff-port-325510": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:28.527475  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.527490  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.527619  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.527655  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.527667  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.527700  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.528609  529417 out.go:177] * Verifying Kubernetes components...
	I0127 13:29:28.530189  529417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:28.546697  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I0127 13:29:28.547331  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.547485  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I0127 13:29:28.547528  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
	I0127 13:29:28.547893  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.548297  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.548482  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.548497  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.548832  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.549020  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.549338  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.549354  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.549743  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
	I0127 13:29:28.549980  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.550227  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.550241  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.550306  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.550880  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.550926  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.551223  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.551394  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.551416  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.551971  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.552001  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.552189  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.552980  529417 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-325510"
	W0127 13:29:28.553005  529417 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:29:28.553038  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.553380  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.553426  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.555977  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.556013  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.572312  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I0127 13:29:28.573004  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.573598  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.573617  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.573988  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.574040  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
	I0127 13:29:28.574171  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.574508  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43399
	I0127 13:29:28.575096  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.575836  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.576253  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.576355  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.576375  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.577245  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.577419  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.579103  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.579756  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.579779  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.580518  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42715
	I0127 13:29:28.580886  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.581173  529417 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:29:28.581406  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.581423  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.581695  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.581855  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.582619  529417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:29:28.583309  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.583662  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.584326  529417 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:28.584346  529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:29:28.584368  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.587322  529417 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:29:28.587999  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.588047  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.591379  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.591427  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.591456  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.591496  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.591585  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.591752  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.591911  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.592584  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:29:28.592601  529417 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:29:28.592621  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.593660  529417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:29:28.595128  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:29:28.595147  529417 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:29:28.595179  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.596235  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.597222  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.597304  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.597628  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.597788  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.597943  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.598078  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.599130  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.599670  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.599694  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.599880  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.600049  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.600195  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.600327  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.610825  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I0127 13:29:28.611379  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.611919  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.611939  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.612288  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.612480  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.614326  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.614636  529417 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:28.614668  529417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:29:28.614688  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.618088  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.618805  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.618958  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.619294  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.619517  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.619738  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.619953  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.750007  529417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:28.770798  529417 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-325510" to be "Ready" ...
	I0127 13:29:28.794753  529417 node_ready.go:49] node "default-k8s-diff-port-325510" has status "Ready":"True"
	I0127 13:29:28.794783  529417 node_ready.go:38] duration metric: took 23.945006ms for node "default-k8s-diff-port-325510" to be "Ready" ...
	I0127 13:29:28.794796  529417 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:28.801618  529417 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:28.841055  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:29:28.841089  529417 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:29:28.865445  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:29:28.865479  529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:29:28.870120  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:28.887649  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:29:28.887691  529417 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:29:28.908488  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:28.926717  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:29:28.926752  529417 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:29:28.949234  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:29:28.949269  529417 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:29:28.983403  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:28.983438  529417 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:29:29.010532  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:29:29.010567  529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:29:29.085215  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:29:29.085250  529417 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:29:29.085479  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:29.180902  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:29:29.180935  529417 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:29:29.239792  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:29:29.239830  529417 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:29:29.350534  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:29:29.350566  529417 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:29:29.463271  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:29.463315  529417 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:29:29.551176  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:30.055621  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.147081618s)
	I0127 13:29:30.055704  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.055723  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.056191  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.056215  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.056226  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.056255  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.056323  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.056341  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18618522s)
	I0127 13:29:30.056436  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.056465  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.056627  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.056649  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.056963  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.058774  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.058792  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.058808  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.058817  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.059068  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.059083  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.059098  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.083977  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.084003  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.084571  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.084583  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.084595  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.830919  529417 pod_ready.go:103] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:30.961132  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.875594685s)
	I0127 13:29:30.961202  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.961219  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.963600  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.963608  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.963645  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.963654  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.963662  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.964368  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.964392  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.964451  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.964463  529417 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-325510"
	I0127 13:29:32.478187  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.926948394s)
	I0127 13:29:32.478257  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:32.478272  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:32.478650  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:32.478671  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:32.478683  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:32.478693  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:32.479015  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:32.479033  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:32.482147  529417 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-325510 addons enable metrics-server
	
	I0127 13:29:32.483736  529417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:29:32.484840  529417 addons.go:514] duration metric: took 3.958103252s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:29:33.314653  529417 pod_ready.go:103] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:34.308087  529417 pod_ready.go:93] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:34.308114  529417 pod_ready.go:82] duration metric: took 5.506466228s for pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.308126  529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.314009  529417 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:34.314033  529417 pod_ready.go:82] duration metric: took 5.900062ms for pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.314044  529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.321801  529417 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:34.321823  529417 pod_ready.go:82] duration metric: took 7.77255ms for pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.321836  529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:36.328661  529417 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:38.833405  529417 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:39.331942  529417 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:39.331971  529417 pod_ready.go:82] duration metric: took 5.010119744s for pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:39.331983  529417 pod_ready.go:39] duration metric: took 10.537174991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:39.332004  529417 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:39.332061  529417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:39.364826  529417 api_server.go:72] duration metric: took 10.838138782s to wait for apiserver process to appear ...
	I0127 13:29:39.364856  529417 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:39.364880  529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0127 13:29:39.395339  529417 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0127 13:29:39.403463  529417 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:39.403502  529417 api_server.go:131] duration metric: took 38.63787ms to wait for apiserver health ...
	I0127 13:29:39.403515  529417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:39.428974  529417 system_pods.go:59] 9 kube-system pods found
	I0127 13:29:39.429008  529417 system_pods.go:61] "coredns-668d6bf9bc-mgxmm" [15f65844-c002-4253-9f43-609e6d3d86c0] Running
	I0127 13:29:39.429013  529417 system_pods.go:61] "coredns-668d6bf9bc-rlvv2" [b116f02c-d30f-4869-bef1-55722f0f1a58] Running
	I0127 13:29:39.429016  529417 system_pods.go:61] "etcd-default-k8s-diff-port-325510" [88fd4825-b74c-43e0-8a3e-fd60bb409b76] Running
	I0127 13:29:39.429021  529417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-325510" [4eeff905-b36f-4be8-ac24-77c8421495c4] Running
	I0127 13:29:39.429024  529417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-325510" [07956b85-b521-44cc-be77-675703803a17] Running
	I0127 13:29:39.429027  529417 system_pods.go:61] "kube-proxy-gb24h" [d0d50b9f-b02f-49dd-9a7a-78e202ce247a] Running
	I0127 13:29:39.429031  529417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-325510" [a7c2c0c5-c386-454d-9542-852b02901060] Running
	I0127 13:29:39.429037  529417 system_pods.go:61] "metrics-server-f79f97bbb-vtvnn" [07e0c335-6a2b-4ef3-b153-3689cdb7ccaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:39.429041  529417 system_pods.go:61] "storage-provisioner" [7b76ca76-2bfc-44c4-bfc3-5ac3f4cde72b] Running
	I0127 13:29:39.429048  529417 system_pods.go:74] duration metric: took 25.526569ms to wait for pod list to return data ...
	I0127 13:29:39.429056  529417 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:29:39.449041  529417 default_sa.go:45] found service account: "default"
	I0127 13:29:39.449083  529417 default_sa.go:55] duration metric: took 20.019081ms for default service account to be created ...
	I0127 13:29:39.449098  529417 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:29:39.468326  529417 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-325510 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-325510 -n default-k8s-diff-port-325510
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-325510 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-325510 logs -n 25: (1.331753678s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-325431                  | no-preload-325431            | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC | 27 Jan 25 13:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-325431                                   | no-preload-325431            | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-766944                 | embed-certs-766944           | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-766944                                  | embed-certs-766944           | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-325510       | default-k8s-diff-port-325510 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-325510 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC |                     |
	|         | default-k8s-diff-port-325510                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-116657             | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-116657 image                           | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	| delete  | -p old-k8s-version-116657                              | old-k8s-version-116657       | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	| start   | -p newest-cni-296225 --memory=2200 --alsologtostderr   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-296225             | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-296225                  | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-296225 --memory=2200 --alsologtostderr   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-296225 image list                           | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	| delete  | -p newest-cni-296225                                   | newest-cni-296225            | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
	| delete  | -p no-preload-325431                                   | no-preload-325431            | jenkins | v1.35.0 | 27 Jan 25 13:50 UTC | 27 Jan 25 13:50 UTC |
	| delete  | -p embed-certs-766944                                  | embed-certs-766944           | jenkins | v1.35.0 | 27 Jan 25 13:51 UTC | 27 Jan 25 13:51 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:28:56
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:28:56.167206  531586 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:28:56.167420  531586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:28:56.167436  531586 out.go:358] Setting ErrFile to fd 2...
	I0127 13:28:56.167442  531586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:28:56.167737  531586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 13:28:56.168827  531586 out.go:352] Setting JSON to false
	I0127 13:28:56.169977  531586 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":36633,"bootTime":1737947903,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:28:56.170093  531586 start.go:139] virtualization: kvm guest
	I0127 13:28:56.172461  531586 out.go:177] * [newest-cni-296225] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:28:56.174020  531586 notify.go:220] Checking for updates...
	I0127 13:28:56.174033  531586 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:28:56.175512  531586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:28:56.176838  531586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:28:56.178184  531586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	I0127 13:28:56.179518  531586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:28:56.180891  531586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:28:56.182708  531586 config.go:182] Loaded profile config "newest-cni-296225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:28:56.183131  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:56.183194  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:56.200308  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
	I0127 13:28:56.201060  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:56.201765  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:28:56.201797  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:56.202181  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:56.202408  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:28:56.202728  531586 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:28:56.203250  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:56.203319  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:56.220011  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I0127 13:28:56.220435  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:56.220978  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:28:56.221006  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:56.221409  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:56.221606  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:28:56.258580  531586 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:28:56.260066  531586 start.go:297] selected driver: kvm2
	I0127 13:28:56.260097  531586 start.go:901] validating driver "kvm2" against &{Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:28:56.260225  531586 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:28:56.260938  531586 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:28:56.261024  531586 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-466901/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:28:56.277111  531586 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:28:56.277523  531586 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:28:56.277560  531586 cni.go:84] Creating CNI manager for ""
	I0127 13:28:56.277605  531586 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:28:56.277639  531586 start.go:340] cluster config:
	{Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:28:56.277740  531586 iso.go:125] acquiring lock: {Name:mkcc3db98c9d4661e75c49bd9b203d0232dff8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:28:56.280361  531586 out.go:177] * Starting "newest-cni-296225" primary control-plane node in "newest-cni-296225" cluster
	I0127 13:28:56.281606  531586 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:28:56.281678  531586 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 13:28:56.281692  531586 cache.go:56] Caching tarball of preloaded images
	I0127 13:28:56.281783  531586 preload.go:172] Found /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 13:28:56.281796  531586 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 13:28:56.281935  531586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/config.json ...
	I0127 13:28:56.282191  531586 start.go:360] acquireMachinesLock for newest-cni-296225: {Name:mke115b779db52cb0a5f0a05f83d5bad0a35c561 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:28:56.282273  531586 start.go:364] duration metric: took 45.538µs to acquireMachinesLock for "newest-cni-296225"
	I0127 13:28:56.282297  531586 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:28:56.282306  531586 fix.go:54] fixHost starting: 
	I0127 13:28:56.282589  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:28:56.282621  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:56.298876  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38851
	I0127 13:28:56.299391  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:56.299946  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:28:56.299975  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:56.300339  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:56.300605  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:28:56.300813  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:28:56.302631  531586 fix.go:112] recreateIfNeeded on newest-cni-296225: state=Stopped err=<nil>
	I0127 13:28:56.302659  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	W0127 13:28:56.302822  531586 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:28:56.304762  531586 out.go:177] * Restarting existing kvm2 VM for "newest-cni-296225" ...
	I0127 13:28:53.806392  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:55.806518  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:57.808012  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:55.406991  529251 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.84407049s)
	I0127 13:28:55.407062  529251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:28:55.426120  529251 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:28:55.438195  529251 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:28:55.457399  529251 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:28:55.457425  529251 kubeadm.go:157] found existing configuration files:
	
	I0127 13:28:55.457485  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:28:55.469544  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:28:55.469611  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:28:55.481065  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:28:55.492868  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:28:55.492928  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:28:55.505930  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:28:55.517268  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:28:55.517332  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:28:55.528681  529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:28:55.539678  529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:28:55.539755  529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:28:55.550987  529251 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:28:55.719870  529251 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:28:56.306046  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Start
	I0127 13:28:56.306254  531586 main.go:141] libmachine: (newest-cni-296225) starting domain...
	I0127 13:28:56.306277  531586 main.go:141] libmachine: (newest-cni-296225) ensuring networks are active...
	I0127 13:28:56.307157  531586 main.go:141] libmachine: (newest-cni-296225) Ensuring network default is active
	I0127 13:28:56.307587  531586 main.go:141] libmachine: (newest-cni-296225) Ensuring network mk-newest-cni-296225 is active
	I0127 13:28:56.307960  531586 main.go:141] libmachine: (newest-cni-296225) getting domain XML...
	I0127 13:28:56.308646  531586 main.go:141] libmachine: (newest-cni-296225) creating domain...
	I0127 13:28:57.604425  531586 main.go:141] libmachine: (newest-cni-296225) waiting for IP...
	I0127 13:28:57.605479  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:57.606123  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:57.606254  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:57.606079  531622 retry.go:31] will retry after 235.333873ms: waiting for domain to come up
	I0127 13:28:57.843349  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:57.843843  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:57.843877  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:57.843796  531622 retry.go:31] will retry after 261.244379ms: waiting for domain to come up
	I0127 13:28:58.107236  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:58.107847  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:58.107885  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:58.107815  531622 retry.go:31] will retry after 367.467141ms: waiting for domain to come up
	I0127 13:28:58.477662  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:58.478416  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:58.478454  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:58.478385  531622 retry.go:31] will retry after 466.451127ms: waiting for domain to come up
	I0127 13:28:58.946239  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:58.946809  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:58.946854  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:58.946766  531622 retry.go:31] will retry after 559.614953ms: waiting for domain to come up
	I0127 13:28:59.507817  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:28:59.508251  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:28:59.508317  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:59.508231  531622 retry.go:31] will retry after 651.013274ms: waiting for domain to come up
	I0127 13:29:00.161338  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:00.161916  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:00.161944  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:00.161879  531622 retry.go:31] will retry after 780.526485ms: waiting for domain to come up
	I0127 13:29:00.944251  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:00.944845  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:00.944875  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:00.944817  531622 retry.go:31] will retry after 1.304098s: waiting for domain to come up
	I0127 13:28:59.808090  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:01.808480  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:04.273698  529251 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:29:04.273779  529251 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:29:04.273879  529251 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:29:04.274011  529251 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:29:04.274137  529251 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:29:04.274229  529251 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:29:04.275837  529251 out.go:235]   - Generating certificates and keys ...
	I0127 13:29:04.275953  529251 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:29:04.276042  529251 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:29:04.276162  529251 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:29:04.276253  529251 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:29:04.276359  529251 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:29:04.276440  529251 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:29:04.276535  529251 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:29:04.276675  529251 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:29:04.276764  529251 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:29:04.276906  529251 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:29:04.276967  529251 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:29:04.277065  529251 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:29:04.277113  529251 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:29:04.277186  529251 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:29:04.277274  529251 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:29:04.277381  529251 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:29:04.277460  529251 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:29:04.277559  529251 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:29:04.277647  529251 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:29:04.280280  529251 out.go:235]   - Booting up control plane ...
	I0127 13:29:04.280412  529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:29:04.280494  529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:29:04.280588  529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:29:04.280708  529251 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:29:04.280854  529251 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:29:04.280919  529251 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:29:04.281101  529251 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:29:04.281252  529251 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:29:04.281343  529251 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002900104s
	I0127 13:29:04.281472  529251 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:29:04.281557  529251 kubeadm.go:310] [api-check] The API server is healthy after 5.001737119s
	I0127 13:29:04.281687  529251 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:29:04.281880  529251 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:29:04.281947  529251 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:29:04.282181  529251 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-766944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:29:04.282286  529251 kubeadm.go:310] [bootstrap-token] Using token: cubj1b.pwpdo0hgbjp08kat
	I0127 13:29:04.283697  529251 out.go:235]   - Configuring RBAC rules ...
	I0127 13:29:04.283851  529251 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:29:04.283970  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:29:04.284120  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:29:04.284293  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:29:04.284399  529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:29:04.284473  529251 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:29:04.284576  529251 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:29:04.284615  529251 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:29:04.284679  529251 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:29:04.284689  529251 kubeadm.go:310] 
	I0127 13:29:04.284780  529251 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:29:04.284794  529251 kubeadm.go:310] 
	I0127 13:29:04.284891  529251 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:29:04.284900  529251 kubeadm.go:310] 
	I0127 13:29:04.284950  529251 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:29:04.285047  529251 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:29:04.285134  529251 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:29:04.285146  529251 kubeadm.go:310] 
	I0127 13:29:04.285267  529251 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:29:04.285279  529251 kubeadm.go:310] 
	I0127 13:29:04.285341  529251 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:29:04.285356  529251 kubeadm.go:310] 
	I0127 13:29:04.285410  529251 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:29:04.285478  529251 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:29:04.285536  529251 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:29:04.285542  529251 kubeadm.go:310] 
	I0127 13:29:04.285636  529251 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:29:04.285723  529251 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:29:04.285731  529251 kubeadm.go:310] 
	I0127 13:29:04.285803  529251 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cubj1b.pwpdo0hgbjp08kat \
	I0127 13:29:04.285958  529251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 \
	I0127 13:29:04.285997  529251 kubeadm.go:310] 	--control-plane 
	I0127 13:29:04.286004  529251 kubeadm.go:310] 
	I0127 13:29:04.286115  529251 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:29:04.286121  529251 kubeadm.go:310] 
	I0127 13:29:04.286247  529251 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cubj1b.pwpdo0hgbjp08kat \
	I0127 13:29:04.286407  529251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 
	I0127 13:29:04.286424  529251 cni.go:84] Creating CNI manager for ""
	I0127 13:29:04.286436  529251 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:04.288049  529251 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:29:02.250183  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:02.250724  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:02.250759  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:02.250691  531622 retry.go:31] will retry after 1.464046224s: waiting for domain to come up
	I0127 13:29:03.716441  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:03.716968  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:03.716995  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:03.716911  531622 retry.go:31] will retry after 1.473613486s: waiting for domain to come up
	I0127 13:29:05.192629  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:05.193220  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:05.193256  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:05.193184  531622 retry.go:31] will retry after 1.906374841s: waiting for domain to come up
	I0127 13:29:04.289218  529251 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:29:04.306228  529251 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:29:04.327835  529251 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:29:04.328008  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:04.328068  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-766944 minikube.k8s.io/updated_at=2025_01_27T13_29_04_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=embed-certs-766944 minikube.k8s.io/primary=true
	I0127 13:29:04.340778  529251 ops.go:34] apiserver oom_adj: -16
	I0127 13:29:04.617241  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:05.117682  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:05.618141  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:06.117679  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:06.618036  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:07.118302  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:07.618303  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:08.117464  529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:08.221604  529251 kubeadm.go:1113] duration metric: took 3.893670046s to wait for elevateKubeSystemPrivileges
	I0127 13:29:08.221659  529251 kubeadm.go:394] duration metric: took 4m36.506709461s to StartCluster
	I0127 13:29:08.221687  529251 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:08.221784  529251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:08.223152  529251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:08.223468  529251 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:29:08.223561  529251 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:29:08.223686  529251 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-766944"
	I0127 13:29:08.223707  529251 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-766944"
	W0127 13:29:08.223715  529251 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:29:08.223720  529251 addons.go:69] Setting default-storageclass=true in profile "embed-certs-766944"
	I0127 13:29:08.223775  529251 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-766944"
	I0127 13:29:08.223766  529251 addons.go:69] Setting dashboard=true in profile "embed-certs-766944"
	I0127 13:29:08.223766  529251 addons.go:69] Setting metrics-server=true in profile "embed-certs-766944"
	I0127 13:29:08.223788  529251 config.go:182] Loaded profile config "embed-certs-766944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:08.223797  529251 addons.go:238] Setting addon dashboard=true in "embed-certs-766944"
	I0127 13:29:08.223800  529251 addons.go:238] Setting addon metrics-server=true in "embed-certs-766944"
	W0127 13:29:08.223808  529251 addons.go:247] addon metrics-server should already be in state true
	W0127 13:29:08.223808  529251 addons.go:247] addon dashboard should already be in state true
	I0127 13:29:08.223748  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.223840  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.223862  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.224260  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224276  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224288  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224294  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.224311  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.224322  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.224276  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.224390  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.225260  529251 out.go:177] * Verifying Kubernetes components...
	I0127 13:29:08.226552  529251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:08.244300  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
	I0127 13:29:08.244514  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
	I0127 13:29:08.244516  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0127 13:29:08.245012  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.245254  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.245333  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.245603  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.245621  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.245769  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.245780  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.245787  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.245804  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.246187  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.246236  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.246240  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.246450  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.246858  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.246858  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.246898  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.246908  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.246957  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
	I0127 13:29:08.247392  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.248029  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.248055  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.248479  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.249163  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.249212  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.251401  529251 addons.go:238] Setting addon default-storageclass=true in "embed-certs-766944"
	W0127 13:29:08.251426  529251 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:29:08.251459  529251 host.go:66] Checking if "embed-certs-766944" exists ...
	I0127 13:29:08.251834  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.251888  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.268388  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I0127 13:29:08.268957  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.269472  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.269488  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.269556  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0127 13:29:08.269902  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.270014  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.270112  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.270466  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.270483  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.270877  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.271178  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.272419  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.273919  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.274603  529251 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:29:08.275601  529251 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:29:08.276632  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:29:08.276650  529251 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:29:08.276675  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.277578  529251 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:08.277591  529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:29:08.277605  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.278681  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36115
	I0127 13:29:08.279322  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.280065  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.280083  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.280587  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.280859  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.282532  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.282997  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.283505  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.283533  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.283908  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.284083  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.284241  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.284285  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.284416  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.284808  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.284841  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.284853  529251 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:29:03.808549  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:05.809379  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:08.287154  529251 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:29:08.287385  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.287589  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.287760  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.287917  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.288316  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:29:08.288338  529251 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:29:08.288353  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.292370  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.292819  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.292844  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.293148  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.293268  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41237
	I0127 13:29:08.293441  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.293632  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.293671  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.293763  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.294180  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.294204  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.294614  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.295134  529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:08.295170  529251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:08.312630  529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
	I0127 13:29:08.313201  529251 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:08.314043  529251 main.go:141] libmachine: Using API Version  1
	I0127 13:29:08.314071  529251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:08.315352  529251 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:08.315586  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
	I0127 13:29:08.317764  529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
	I0127 13:29:08.318043  529251 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:08.318064  529251 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:29:08.318087  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
	I0127 13:29:08.321585  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.322028  529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
	I0127 13:29:08.322057  529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
	I0127 13:29:08.322200  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
	I0127 13:29:08.322476  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
	I0127 13:29:08.322607  529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
	I0127 13:29:08.322797  529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
	I0127 13:29:08.543349  529251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:08.566526  529251 node_ready.go:35] waiting up to 6m0s for node "embed-certs-766944" to be "Ready" ...
	I0127 13:29:08.581029  529251 node_ready.go:49] node "embed-certs-766944" has status "Ready":"True"
	I0127 13:29:08.581058  529251 node_ready.go:38] duration metric: took 14.437055ms for node "embed-certs-766944" to be "Ready" ...
	I0127 13:29:08.581072  529251 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:08.591111  529251 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:08.663492  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:29:08.663529  529251 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:29:08.708763  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:08.731924  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:08.733763  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:29:08.733792  529251 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:29:08.816600  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:29:08.816646  529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:29:08.862311  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:29:08.862346  529251 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:29:08.881791  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:29:08.881830  529251 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:29:08.965427  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:29:08.965468  529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:29:09.025682  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:29:09.025718  529251 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:29:09.026871  529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:09.026896  529251 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:29:09.106376  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:29:09.106408  529251 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:29:09.173153  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:07.101069  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:07.101691  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:07.101724  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:07.101645  531622 retry.go:31] will retry after 3.3503886s: waiting for domain to come up
	I0127 13:29:10.454092  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:10.454611  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:10.454643  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:10.454550  531622 retry.go:31] will retry after 2.977667559s: waiting for domain to come up
	I0127 13:29:09.316157  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:29:09.316202  529251 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:29:09.518415  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:29:09.518455  529251 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:29:09.836886  529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:09.836931  529251 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:29:09.974913  529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:10.529287  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.820478856s)
	I0127 13:29:10.529346  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.797380034s)
	I0127 13:29:10.529398  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.529415  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.529355  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.529488  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.529871  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.529910  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:10.529932  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.529943  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.529951  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.529878  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.530045  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.530070  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.530088  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.530265  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:10.530268  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.530299  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.530463  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.530482  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.599533  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:10.599626  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:10.599978  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:10.600095  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:10.600128  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:10.613397  529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:11.025503  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.852294623s)
	I0127 13:29:11.025583  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.025598  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.025974  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:11.026056  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.026072  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.026081  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.026094  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.026369  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:11.026430  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.026446  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.026465  529251 addons.go:479] Verifying addon metrics-server=true in "embed-certs-766944"
	I0127 13:29:11.846156  529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.871176785s)
	I0127 13:29:11.846235  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.846258  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.846647  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.846693  529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
	I0127 13:29:11.846706  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.846720  529251 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:11.846730  529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
	I0127 13:29:11.847020  529251 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:11.847069  529251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:11.849004  529251 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-766944 addons enable metrics-server
	
	I0127 13:29:11.850858  529251 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:29:08.309241  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:10.806393  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:12.808038  529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:11.852345  529251 addons.go:514] duration metric: took 3.628795827s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:29:13.097655  529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:13.433798  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:13.434282  531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
	I0127 13:29:13.434324  531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:13.434271  531622 retry.go:31] will retry after 5.418420331s: waiting for domain to come up
	I0127 13:29:14.300254  529417 pod_ready.go:82] duration metric: took 4m0.000130065s for pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace to be "Ready" ...
	E0127 13:29:14.300291  529417 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 13:29:14.300324  529417 pod_ready.go:39] duration metric: took 4m12.210910321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:14.300355  529417 kubeadm.go:597] duration metric: took 4m20.336267253s to restartPrimaryControlPlane
	W0127 13:29:14.300420  529417 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:29:14.300449  529417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 13:29:16.335301  529417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.034816955s)
	I0127 13:29:16.335395  529417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:29:16.352998  529417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:29:16.365092  529417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:29:16.378733  529417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:29:16.378758  529417 kubeadm.go:157] found existing configuration files:
	
	I0127 13:29:16.378804  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 13:29:16.395924  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:29:16.396005  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:29:16.408496  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 13:29:16.418917  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:29:16.418986  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:29:16.429065  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 13:29:16.439234  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:29:16.439333  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:29:16.449865  529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 13:29:16.460738  529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:29:16.460831  529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:29:16.472411  529417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:29:16.642625  529417 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:29:15.100860  529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:16.102026  529251 pod_ready.go:93] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.102064  529251 pod_ready.go:82] duration metric: took 7.510920671s for pod "etcd-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.102080  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.108782  529251 pod_ready.go:93] pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.108818  529251 pod_ready.go:82] duration metric: took 6.727536ms for pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.108832  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.117964  529251 pod_ready.go:93] pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.117994  529251 pod_ready.go:82] duration metric: took 9.151947ms for pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.118008  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vp88s" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.125633  529251 pod_ready.go:93] pod "kube-proxy-vp88s" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.125657  529251 pod_ready.go:82] duration metric: took 7.641622ms for pod "kube-proxy-vp88s" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.125667  529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.141368  529251 pod_ready.go:93] pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:16.141395  529251 pod_ready.go:82] duration metric: took 15.721182ms for pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:16.141403  529251 pod_ready.go:39] duration metric: took 7.560318089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:16.141421  529251 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:16.141484  529251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:16.168318  529251 api_server.go:72] duration metric: took 7.944806249s to wait for apiserver process to appear ...
	I0127 13:29:16.168353  529251 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:16.168382  529251 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0127 13:29:16.178242  529251 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0127 13:29:16.179663  529251 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:16.179696  529251 api_server.go:131] duration metric: took 11.33324ms to wait for apiserver health ...
	I0127 13:29:16.179706  529251 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:16.299895  529251 system_pods.go:59] 9 kube-system pods found
	I0127 13:29:16.299927  529251 system_pods.go:61] "coredns-668d6bf9bc-9h4k2" [0eb84d56-e399-4808-afda-b0e1ec4f201f] Running
	I0127 13:29:16.299933  529251 system_pods.go:61] "coredns-668d6bf9bc-wf444" [7afc402e-ab81-4eb5-b2cf-08be738f171d] Running
	I0127 13:29:16.299937  529251 system_pods.go:61] "etcd-embed-certs-766944" [22be64ef-9ba9-4750-aca9-f34b01b46f16] Running
	I0127 13:29:16.299941  529251 system_pods.go:61] "kube-apiserver-embed-certs-766944" [397082cc-acad-493c-8ddd-9f49def9100a] Running
	I0127 13:29:16.299945  529251 system_pods.go:61] "kube-controller-manager-embed-certs-766944" [fe84cf8b-7074-485b-a16e-d75b52b9fe15] Running
	I0127 13:29:16.299948  529251 system_pods.go:61] "kube-proxy-vp88s" [18e5bf87-73fb-43c4-a73e-b2f21a1bb7b8] Running
	I0127 13:29:16.299951  529251 system_pods.go:61] "kube-scheduler-embed-certs-766944" [96587dc6-6fbd-4d22-acfa-09a89f1e711a] Running
	I0127 13:29:16.299956  529251 system_pods.go:61] "metrics-server-f79f97bbb-27dz9" [9f604bd3-a953-4a12-b1bc-48e4e4c8bb4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:16.299962  529251 system_pods.go:61] "storage-provisioner" [7d91f3a9-4b10-40fa-84bc-9d881d955319] Running
	I0127 13:29:16.299973  529251 system_pods.go:74] duration metric: took 120.259661ms to wait for pod list to return data ...
	I0127 13:29:16.299984  529251 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:29:16.496603  529251 default_sa.go:45] found service account: "default"
	I0127 13:29:16.496645  529251 default_sa.go:55] duration metric: took 196.6512ms for default service account to be created ...
	I0127 13:29:16.496658  529251 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:29:16.702376  529251 system_pods.go:87] 9 kube-system pods found
	I0127 13:29:18.854257  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:18.854914  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has current primary IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:18.854944  531586 main.go:141] libmachine: (newest-cni-296225) found domain IP: 192.168.72.46
	I0127 13:29:18.854956  531586 main.go:141] libmachine: (newest-cni-296225) reserving static IP address...
	I0127 13:29:18.855436  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "newest-cni-296225", mac: "52:54:00:25:60:c9", ip: "192.168.72.46"} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:18.855466  531586 main.go:141] libmachine: (newest-cni-296225) DBG | skip adding static IP to network mk-newest-cni-296225 - found existing host DHCP lease matching {name: "newest-cni-296225", mac: "52:54:00:25:60:c9", ip: "192.168.72.46"}
	I0127 13:29:18.855480  531586 main.go:141] libmachine: (newest-cni-296225) reserved static IP address 192.168.72.46 for domain newest-cni-296225
	I0127 13:29:18.855493  531586 main.go:141] libmachine: (newest-cni-296225) waiting for SSH...
	I0127 13:29:18.855509  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Getting to WaitForSSH function...
	I0127 13:29:18.858091  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:18.858477  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:18.858507  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:18.858705  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Using SSH client type: external
	I0127 13:29:18.858725  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa (-rw-------)
	I0127 13:29:18.858760  531586 main.go:141] libmachine: (newest-cni-296225) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.46 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:29:18.858784  531586 main.go:141] libmachine: (newest-cni-296225) DBG | About to run SSH command:
	I0127 13:29:18.858806  531586 main.go:141] libmachine: (newest-cni-296225) DBG | exit 0
	I0127 13:29:18.996896  531586 main.go:141] libmachine: (newest-cni-296225) DBG | SSH cmd err, output: <nil>: 
	I0127 13:29:18.997263  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetConfigRaw
	I0127 13:29:18.998035  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
	I0127 13:29:19.001537  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.001980  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.002005  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.002524  531586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/config.json ...
	I0127 13:29:19.002778  531586 machine.go:93] provisionDockerMachine start ...
	I0127 13:29:19.002804  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:19.003111  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.006300  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.006759  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.006788  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.006991  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.007221  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.007434  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.007600  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.007802  531586 main.go:141] libmachine: Using SSH client type: native
	I0127 13:29:19.008050  531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.46 22 <nil> <nil>}
	I0127 13:29:19.008068  531586 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:29:19.124549  531586 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:29:19.124589  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetMachineName
	I0127 13:29:19.124921  531586 buildroot.go:166] provisioning hostname "newest-cni-296225"
	I0127 13:29:19.124953  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetMachineName
	I0127 13:29:19.125168  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.128509  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.128870  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.128904  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.129136  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.129338  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.129489  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.129682  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.129915  531586 main.go:141] libmachine: Using SSH client type: native
	I0127 13:29:19.130181  531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.46 22 <nil> <nil>}
	I0127 13:29:19.130202  531586 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-296225 && echo "newest-cni-296225" | sudo tee /etc/hostname
	I0127 13:29:19.274181  531586 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-296225
	
	I0127 13:29:19.274233  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.277975  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.278540  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.278575  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.278963  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.279243  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.279514  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.279686  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.279898  531586 main.go:141] libmachine: Using SSH client type: native
	I0127 13:29:19.280149  531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.46 22 <nil> <nil>}
	I0127 13:29:19.280176  531586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-296225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-296225/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-296225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:29:19.425977  531586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:29:19.426016  531586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-466901/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-466901/.minikube}
	I0127 13:29:19.426066  531586 buildroot.go:174] setting up certificates
	I0127 13:29:19.426080  531586 provision.go:84] configureAuth start
	I0127 13:29:19.426092  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetMachineName
	I0127 13:29:19.426372  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
	I0127 13:29:19.429756  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.430201  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.430230  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.430467  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.432982  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.433352  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.433381  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.433508  531586 provision.go:143] copyHostCerts
	I0127 13:29:19.433596  531586 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem, removing ...
	I0127 13:29:19.433613  531586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem
	I0127 13:29:19.433713  531586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem (1082 bytes)
	I0127 13:29:19.433862  531586 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem, removing ...
	I0127 13:29:19.433898  531586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem
	I0127 13:29:19.433952  531586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem (1123 bytes)
	I0127 13:29:19.434069  531586 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem, removing ...
	I0127 13:29:19.434083  531586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem
	I0127 13:29:19.434121  531586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem (1675 bytes)
	I0127 13:29:19.434225  531586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem org=jenkins.newest-cni-296225 san=[127.0.0.1 192.168.72.46 localhost minikube newest-cni-296225]
	I0127 13:29:19.616134  531586 provision.go:177] copyRemoteCerts
	I0127 13:29:19.616230  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:29:19.616268  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.619632  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.620115  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.620170  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.620627  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.620882  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.621062  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.621267  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:19.716453  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 13:29:19.751558  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 13:29:19.787164  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:29:19.822729  531586 provision.go:87] duration metric: took 396.632166ms to configureAuth
	I0127 13:29:19.822766  531586 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:29:19.823021  531586 config.go:182] Loaded profile config "newest-cni-296225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:19.823035  531586 machine.go:96] duration metric: took 820.241874ms to provisionDockerMachine
	I0127 13:29:19.823044  531586 start.go:293] postStartSetup for "newest-cni-296225" (driver="kvm2")
	I0127 13:29:19.823074  531586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:29:19.823125  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:19.823524  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:29:19.823610  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.826416  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.826837  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.826869  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.827189  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.827424  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.827641  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.827800  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:19.922618  531586 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:29:19.927700  531586 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:29:19.927740  531586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/addons for local assets ...
	I0127 13:29:19.927820  531586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/files for local assets ...
	I0127 13:29:19.927920  531586 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem -> 4742752.pem in /etc/ssl/certs
	I0127 13:29:19.928047  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:29:19.940393  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /etc/ssl/certs/4742752.pem (1708 bytes)
	I0127 13:29:19.970138  531586 start.go:296] duration metric: took 147.059526ms for postStartSetup
	I0127 13:29:19.970186  531586 fix.go:56] duration metric: took 23.687879815s for fixHost
	I0127 13:29:19.970213  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:19.973696  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.974136  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:19.974162  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:19.974433  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:19.974671  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.974863  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:19.975000  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:19.975177  531586 main.go:141] libmachine: Using SSH client type: native
	I0127 13:29:19.975406  531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.46 22 <nil> <nil>}
	I0127 13:29:19.975421  531586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:29:20.097158  531586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984560.051374432
	
	I0127 13:29:20.097195  531586 fix.go:216] guest clock: 1737984560.051374432
	I0127 13:29:20.097205  531586 fix.go:229] Guest: 2025-01-27 13:29:20.051374432 +0000 UTC Remote: 2025-01-27 13:29:19.970191951 +0000 UTC m=+23.842107580 (delta=81.182481ms)
	I0127 13:29:20.097251  531586 fix.go:200] guest clock delta is within tolerance: 81.182481ms
	I0127 13:29:20.097264  531586 start.go:83] releasing machines lock for "newest-cni-296225", held for 23.814976228s
	I0127 13:29:20.097302  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:20.097604  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
	I0127 13:29:20.101191  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.101642  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:20.101693  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.101991  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:20.102587  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:20.102797  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:20.102930  531586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:29:20.102980  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:20.103025  531586 ssh_runner.go:195] Run: cat /version.json
	I0127 13:29:20.103054  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:20.106331  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.106785  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:20.106843  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.106883  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.107100  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:20.107355  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:20.107415  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:20.107456  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:20.107545  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:20.107711  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:20.107752  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:20.107851  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:20.108004  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:20.108175  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:20.198167  531586 ssh_runner.go:195] Run: systemctl --version
	I0127 13:29:20.220547  531586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:29:20.228913  531586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:29:20.229009  531586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:29:20.252220  531586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 13:29:20.252252  531586 start.go:495] detecting cgroup driver to use...
	I0127 13:29:20.252336  531586 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 13:29:20.290040  531586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 13:29:20.307723  531586 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:29:20.307812  531586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:29:20.323473  531586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:29:20.339833  531586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:29:20.476188  531586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:29:20.632180  531586 docker.go:233] disabling docker service ...
	I0127 13:29:20.632272  531586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:29:20.647480  531586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:29:20.662456  531586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:29:20.849643  531586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:29:21.014719  531586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:29:21.034260  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:29:21.055949  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 13:29:21.068764  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 13:29:21.083524  531586 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 13:29:21.083605  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 13:29:21.098914  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:29:21.113664  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 13:29:21.127826  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:29:21.139382  531586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:29:21.151342  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 13:29:21.162384  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 13:29:21.174714  531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 13:29:21.188361  531586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:29:21.201837  531586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:29:21.201921  531586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:29:21.216404  531586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:29:21.226169  531586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:21.347858  531586 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 13:29:21.387449  531586 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 13:29:21.387582  531586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 13:29:21.393515  531586 retry.go:31] will retry after 514.05687ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 13:29:21.908225  531586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 13:29:21.917708  531586 start.go:563] Will wait 60s for crictl version
	I0127 13:29:21.917786  531586 ssh_runner.go:195] Run: which crictl
	I0127 13:29:21.923989  531586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:29:21.981569  531586 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 13:29:21.981675  531586 ssh_runner.go:195] Run: containerd --version
	I0127 13:29:22.027649  531586 ssh_runner.go:195] Run: containerd --version
	I0127 13:29:22.060339  531586 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 13:29:22.061787  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
	I0127 13:29:22.065481  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:22.065908  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:22.065946  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:22.066183  531586 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 13:29:22.070907  531586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:29:22.089788  531586 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 13:29:25.581414  529417 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:29:25.581498  529417 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:29:25.581603  529417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:29:25.581744  529417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:29:25.581857  529417 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:29:25.581911  529417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:29:25.583668  529417 out.go:235]   - Generating certificates and keys ...
	I0127 13:29:25.583784  529417 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:29:25.583864  529417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:29:25.583999  529417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:29:25.584094  529417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:29:25.584212  529417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:29:25.584290  529417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:29:25.584368  529417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:29:25.584490  529417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:29:25.584607  529417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:29:25.584736  529417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:29:25.584797  529417 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:29:25.584859  529417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:29:25.584911  529417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:29:25.584981  529417 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:29:25.585070  529417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:29:25.585182  529417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:29:25.585291  529417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:29:25.585425  529417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:29:25.585505  529417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:29:25.587922  529417 out.go:235]   - Booting up control plane ...
	I0127 13:29:25.588008  529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:29:25.588109  529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:29:25.588212  529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:29:25.588306  529417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:29:25.588407  529417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:29:25.588476  529417 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:29:25.588653  529417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:29:25.588744  529417 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:29:25.588806  529417 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.424535ms
	I0127 13:29:25.588894  529417 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:29:25.588947  529417 kubeadm.go:310] [api-check] The API server is healthy after 6.003546574s
	I0127 13:29:25.589042  529417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:29:25.589188  529417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:29:25.589243  529417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:29:25.589423  529417 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-325510 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:29:25.589477  529417 kubeadm.go:310] [bootstrap-token] Using token: pmveah.4ebz9u5xjcadsa8l
	I0127 13:29:25.590661  529417 out.go:235]   - Configuring RBAC rules ...
	I0127 13:29:25.590772  529417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:29:25.590884  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:29:25.591076  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:29:25.591309  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:29:25.591477  529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:29:25.591601  529417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:29:25.591734  529417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:29:25.591810  529417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:29:25.591869  529417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:29:25.591879  529417 kubeadm.go:310] 
	I0127 13:29:25.591954  529417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:29:25.591974  529417 kubeadm.go:310] 
	I0127 13:29:25.592097  529417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:29:25.592115  529417 kubeadm.go:310] 
	I0127 13:29:25.592151  529417 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:29:25.592237  529417 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:29:25.592327  529417 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:29:25.592337  529417 kubeadm.go:310] 
	I0127 13:29:25.592390  529417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:29:25.592397  529417 kubeadm.go:310] 
	I0127 13:29:25.592435  529417 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:29:25.592439  529417 kubeadm.go:310] 
	I0127 13:29:25.592512  529417 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:29:25.592614  529417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:29:25.592674  529417 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:29:25.592682  529417 kubeadm.go:310] 
	I0127 13:29:25.592801  529417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:29:25.592928  529417 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:29:25.592941  529417 kubeadm.go:310] 
	I0127 13:29:25.593032  529417 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token pmveah.4ebz9u5xjcadsa8l \
	I0127 13:29:25.593158  529417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 \
	I0127 13:29:25.593193  529417 kubeadm.go:310] 	--control-plane 
	I0127 13:29:25.593206  529417 kubeadm.go:310] 
	I0127 13:29:25.593328  529417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:29:25.593347  529417 kubeadm.go:310] 
	I0127 13:29:25.593453  529417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token pmveah.4ebz9u5xjcadsa8l \
	I0127 13:29:25.593643  529417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 
	I0127 13:29:25.593663  529417 cni.go:84] Creating CNI manager for ""
	I0127 13:29:25.593674  529417 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:25.595331  529417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:29:22.091203  531586 kubeadm.go:883] updating cluster {Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:29:22.091437  531586 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:29:22.091524  531586 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:29:22.133513  531586 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:29:22.133543  531586 containerd.go:534] Images already preloaded, skipping extraction
	I0127 13:29:22.133614  531586 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:29:22.172620  531586 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:29:22.172654  531586 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:29:22.172666  531586 kubeadm.go:934] updating node { 192.168.72.46 8443 v1.32.1 containerd true true} ...
	I0127 13:29:22.172814  531586 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-296225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:29:22.172904  531586 ssh_runner.go:195] Run: sudo crictl info
	I0127 13:29:22.221421  531586 cni.go:84] Creating CNI manager for ""
	I0127 13:29:22.221446  531586 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:22.221457  531586 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 13:29:22.221483  531586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.46 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-296225 NodeName:newest-cni-296225 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.46"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:29:22.221619  531586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-296225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.46"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.46"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:29:22.221696  531586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:29:22.233206  531586 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:29:22.233298  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:29:22.247498  531586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0127 13:29:22.265563  531586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:29:22.283377  531586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
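	(The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new before the restart. As a side note, a config of this shape can be sanity-checked offline before kubeadm consumes it; the sketch below is illustrative only and assumes the `kubeadm config validate` subcommand available in recent kubeadm releases:)

	    # Sanity-check the staged kubeadm config without touching the running cluster
	    sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new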
	I0127 13:29:22.304627  531586 ssh_runner.go:195] Run: grep 192.168.72.46	control-plane.minikube.internal$ /etc/hosts
	I0127 13:29:22.310093  531586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.46	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:29:22.328149  531586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:22.474894  531586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:22.498792  531586 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225 for IP: 192.168.72.46
	I0127 13:29:22.498819  531586 certs.go:194] generating shared ca certs ...
	I0127 13:29:22.498848  531586 certs.go:226] acquiring lock for ca certs: {Name:mk60f2aac78eb363c5e06a00675357d94c0df88d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:22.499080  531586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key
	I0127 13:29:22.499144  531586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key
	I0127 13:29:22.499160  531586 certs.go:256] generating profile certs ...
	I0127 13:29:22.499295  531586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/client.key
	I0127 13:29:22.499368  531586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/apiserver.key.1b824597
	I0127 13:29:22.499428  531586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/proxy-client.key
	I0127 13:29:22.499576  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem (1338 bytes)
	W0127 13:29:22.499617  531586 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275_empty.pem, impossibly tiny 0 bytes
	I0127 13:29:22.499632  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:29:22.499663  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem (1082 bytes)
	I0127 13:29:22.499700  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:29:22.499734  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem (1675 bytes)
	I0127 13:29:22.499790  531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem (1708 bytes)
	I0127 13:29:22.500650  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:29:22.551481  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:29:22.590593  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:29:22.630918  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:29:22.660478  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 13:29:22.696686  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 13:29:22.724193  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:29:22.752949  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:29:22.784814  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:29:22.812321  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem --> /usr/share/ca-certificates/474275.pem (1338 bytes)
	I0127 13:29:22.842249  531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /usr/share/ca-certificates/4742752.pem (1708 bytes)
	I0127 13:29:22.872391  531586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:29:22.898310  531586 ssh_runner.go:195] Run: openssl version
	I0127 13:29:22.905518  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:29:22.917623  531586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:29:22.922904  531586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:10 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:29:22.922982  531586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:29:22.929666  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:29:22.941982  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/474275.pem && ln -fs /usr/share/ca-certificates/474275.pem /etc/ssl/certs/474275.pem"
	I0127 13:29:22.955315  531586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/474275.pem
	I0127 13:29:22.962079  531586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:18 /usr/share/ca-certificates/474275.pem
	I0127 13:29:22.962157  531586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/474275.pem
	I0127 13:29:22.971599  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/474275.pem /etc/ssl/certs/51391683.0"
	I0127 13:29:22.985012  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4742752.pem && ln -fs /usr/share/ca-certificates/4742752.pem /etc/ssl/certs/4742752.pem"
	I0127 13:29:22.998788  531586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4742752.pem
	I0127 13:29:23.005232  531586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:18 /usr/share/ca-certificates/4742752.pem
	I0127 13:29:23.005312  531586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4742752.pem
	I0127 13:29:23.013471  531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4742752.pem /etc/ssl/certs/3ec20f2e.0"
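	(The three certificate installs above all follow the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink <hash>.0 into /etc/ssl/certs so OpenSSL-based clients can find it. A minimal manual equivalent, using one of the paths from this log:)

	    # Copy a CA, compute its subject hash, and create the <hash>.0 symlink
	    CERT=/usr/share/ca-certificates/minikubeCA.pem     # path taken from the log above
	    HASH=$(openssl x509 -hash -noout -in "$CERT")      # prints e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"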
	I0127 13:29:23.028126  531586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:29:23.033971  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:29:23.041089  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:29:23.048533  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:29:23.056641  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:29:23.065453  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:29:23.074452  531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
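	(The six openssl runs above check that each existing control-plane certificate remains valid for at least 86400 seconds, i.e. 24 hours; a failing check would trigger regeneration. The same test can be run by hand, for example:)

	    # Exit status 0 means the certificate is still valid 24h (86400s) from now
	    sudo openssl x509 -noout -checkend 86400 \
	        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	        && echo "valid for at least another day" \
	        || echo "expires within 24h - would be regenerated"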
	I0127 13:29:23.083360  531586 kubeadm.go:392] StartCluster: {Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:29:23.083511  531586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 13:29:23.083604  531586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:29:23.138902  531586 cri.go:89] found id: "d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170"
	I0127 13:29:23.138937  531586 cri.go:89] found id: "b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46"
	I0127 13:29:23.138941  531586 cri.go:89] found id: "ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f"
	I0127 13:29:23.138945  531586 cri.go:89] found id: "e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67"
	I0127 13:29:23.138947  531586 cri.go:89] found id: "7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1"
	I0127 13:29:23.138952  531586 cri.go:89] found id: "5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc"
	I0127 13:29:23.138955  531586 cri.go:89] found id: "2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b"
	I0127 13:29:23.138958  531586 cri.go:89] found id: ""
	I0127 13:29:23.139005  531586 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 13:29:23.161523  531586 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T13:29:23Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 13:29:23.161644  531586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:29:23.177352  531586 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:29:23.177377  531586 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:29:23.177436  531586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:29:23.190684  531586 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:29:23.191837  531586 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-296225" does not appear in /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:23.192568  531586 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-466901/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-296225" cluster setting kubeconfig missing "newest-cni-296225" context setting]
	I0127 13:29:23.193462  531586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:23.195884  531586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:29:23.210992  531586 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.46
	I0127 13:29:23.211040  531586 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:29:23.211058  531586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 13:29:23.211141  531586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:29:23.266429  531586 cri.go:89] found id: "d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170"
	I0127 13:29:23.266458  531586 cri.go:89] found id: "b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46"
	I0127 13:29:23.266464  531586 cri.go:89] found id: "ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f"
	I0127 13:29:23.266468  531586 cri.go:89] found id: "e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67"
	I0127 13:29:23.266472  531586 cri.go:89] found id: "7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1"
	I0127 13:29:23.266477  531586 cri.go:89] found id: "5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc"
	I0127 13:29:23.266481  531586 cri.go:89] found id: "2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b"
	I0127 13:29:23.266485  531586 cri.go:89] found id: ""
	I0127 13:29:23.266492  531586 cri.go:252] Stopping containers: [d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170 b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46 ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67 7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1 5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc 2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b]
	I0127 13:29:23.266560  531586 ssh_runner.go:195] Run: which crictl
	I0127 13:29:23.272382  531586 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170 b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46 ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67 7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1 5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc 2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b
	I0127 13:29:23.324924  531586 ssh_runner.go:195] Run: sudo systemctl stop kubelet
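	(Stopping the kube-system workload before the restart uses crictl's label filter to collect every container in the kube-system pod namespace and then stops them in a single call, followed by stopping the kubelet. A condensed sketch of that pattern, mirroring the commands in the log:)

	    # Stop every kube-system container before reconfiguring the control plane
	    IDS=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
	    [ -n "$IDS" ] && sudo crictl stop --timeout=10 $IDS
	    sudo systemctl stop kubelet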
	I0127 13:29:23.345385  531586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:29:23.359679  531586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:29:23.359712  531586 kubeadm.go:157] found existing configuration files:
	
	I0127 13:29:23.359774  531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:29:23.371542  531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:29:23.371634  531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:29:23.383083  531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:29:23.393186  531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:29:23.393267  531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:29:23.406589  531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:29:23.417348  531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:29:23.417444  531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:29:23.430008  531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:29:23.441860  531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:29:23.441965  531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:29:23.452352  531586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:29:23.463556  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:23.634151  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:24.791692  531586 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.15748875s)
	I0127 13:29:24.791732  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:25.027708  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:25.110706  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:25.211743  531586 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:25.211882  531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:25.712041  531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:25.596457  529417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:29:25.611060  529417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:29:25.631563  529417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:29:25.631668  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:25.631709  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-325510 minikube.k8s.io/updated_at=2025_01_27T13_29_25_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=default-k8s-diff-port-325510 minikube.k8s.io/primary=true
	I0127 13:29:25.654141  529417 ops.go:34] apiserver oom_adj: -16
	I0127 13:29:25.885770  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:26.386140  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:26.885887  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:27.386520  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:27.886746  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:28.386093  529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:29:28.523381  529417 kubeadm.go:1113] duration metric: took 2.89179334s to wait for elevateKubeSystemPrivileges
	I0127 13:29:28.523431  529417 kubeadm.go:394] duration metric: took 4m34.628614328s to StartCluster
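	(The repeated `kubectl get sa default` calls above are a readiness poll: after creating the minikube-rbac clusterrolebinding, minikube waits until the default ServiceAccount exists in the restarted control plane before continuing. A rough bash equivalent of that wait loop, using the paths from this log:)

	    # Poll until the default ServiceAccount exists in the restarted control plane
	    KUBECTL=/var/lib/minikube/binaries/v1.32.1/kubectl
	    for _ in $(seq 1 120); do
	        sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1 && break
	        sleep 0.5
	    done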
	I0127 13:29:28.523462  529417 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:28.523566  529417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:28.526181  529417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:28.526636  529417 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:29:28.526773  529417 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:29:28.526897  529417 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-325510"
	I0127 13:29:28.526920  529417 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-325510"
	I0127 13:29:28.526920  529417 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-325510"
	W0127 13:29:28.526930  529417 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:29:28.526943  529417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-325510"
	I0127 13:29:28.526965  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.527036  529417 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-325510"
	I0127 13:29:28.527054  529417 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-325510"
	W0127 13:29:28.527061  529417 addons.go:247] addon dashboard should already be in state true
	I0127 13:29:28.527086  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.527083  529417 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-325510"
	I0127 13:29:28.527117  529417 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-325510"
	W0127 13:29:28.527128  529417 addons.go:247] addon metrics-server should already be in state true
	I0127 13:29:28.527164  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.527436  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.527441  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.526898  529417 config.go:182] Loaded profile config "default-k8s-diff-port-325510": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:28.527475  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.527490  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.527619  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.527655  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.527667  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.527700  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.528609  529417 out.go:177] * Verifying Kubernetes components...
	I0127 13:29:28.530189  529417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:28.546697  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I0127 13:29:28.547331  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.547485  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I0127 13:29:28.547528  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
	I0127 13:29:28.547893  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.548297  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.548482  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.548497  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.548832  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.549020  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.549338  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.549354  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.549743  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
	I0127 13:29:28.549980  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.550227  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.550241  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.550306  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.550880  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.550926  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.551223  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.551394  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.551416  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.551971  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.552001  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.552189  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.552980  529417 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-325510"
	W0127 13:29:28.553005  529417 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:29:28.553038  529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
	I0127 13:29:28.553380  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.553426  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.555977  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.556013  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.572312  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I0127 13:29:28.573004  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.573598  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.573617  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.573988  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.574040  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
	I0127 13:29:28.574171  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.574508  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43399
	I0127 13:29:28.575096  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.575836  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.576253  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.576355  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.576375  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.577245  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.577419  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.579103  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.579756  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.579779  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.580518  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42715
	I0127 13:29:28.580886  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.581173  529417 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:29:28.581406  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.581423  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.581695  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.581855  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.582619  529417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:29:28.583309  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.583662  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.584326  529417 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:28.584346  529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:29:28.584368  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.587322  529417 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:29:28.587999  529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:28.588047  529417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:28.591379  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.591427  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.591456  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.591496  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.591585  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.591752  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.591911  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.592584  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:29:28.592601  529417 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:29:28.592621  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.593660  529417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:29:26.212209  531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:26.236202  531586 api_server.go:72] duration metric: took 1.024459251s to wait for apiserver process to appear ...
	I0127 13:29:26.236238  531586 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:26.236266  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:26.236911  531586 api_server.go:269] stopped: https://192.168.72.46:8443/healthz: Get "https://192.168.72.46:8443/healthz": dial tcp 192.168.72.46:8443: connect: connection refused
	I0127 13:29:26.737118  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:29.390944  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:29:29.390990  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:29:29.391010  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:29.446439  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:29:29.446477  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:29:29.737006  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:29.743881  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:29:29.743915  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:29:30.237168  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:30.251557  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:29:30.251594  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:29:30.737227  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:30.744425  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:29:30.744461  531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:29:31.237274  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:31.244159  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 200:
	ok
	I0127 13:29:31.252139  531586 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:31.252182  531586 api_server.go:131] duration metric: took 5.015933408s to wait for apiserver health ...
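	(The healthz polling above shows the expected progression on a restart: connection refused while the apiserver static pod comes up, 403 for the anonymous probe because "system:anonymous" cannot read /healthz yet, 500 while the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks finish, and finally 200. An equivalent external probe, sketched with curl against the endpoint used in this log:)

	    # Poll the apiserver healthz endpoint until it answers 200 (-k: self-signed cert)
	    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.72.46:8443/healthz)" = "200" ]; do
	        sleep 0.5
	    done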
	I0127 13:29:31.252194  531586 cni.go:84] Creating CNI manager for ""
	I0127 13:29:31.252203  531586 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:29:31.253925  531586 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:29:31.255434  531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:29:31.267804  531586 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:29:31.293560  531586 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:31.313542  531586 system_pods.go:59] 8 kube-system pods found
	I0127 13:29:31.313590  531586 system_pods.go:61] "coredns-668d6bf9bc-xvbfh" [0d7c4469-d90e-4487-8433-1167183525e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:29:31.313601  531586 system_pods.go:61] "etcd-newest-cni-296225" [97ed55b3-82a8-4ecf-a721-26a592f2c8cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:29:31.313612  531586 system_pods.go:61] "kube-apiserver-newest-cni-296225" [d31606a7-2b78-4859-80a7-35b783b0a444] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:29:31.313621  531586 system_pods.go:61] "kube-controller-manager-newest-cni-296225" [4d6c4da8-a13a-44c2-a877-13b9453142a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:29:31.313631  531586 system_pods.go:61] "kube-proxy-dzvvc" [1ec15899-c7eb-436d-9e74-fadde7ecacb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 13:29:31.313640  531586 system_pods.go:61] "kube-scheduler-newest-cni-296225" [2c230f78-68ac-4abb-9cdd-5cf666793981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:29:31.313655  531586 system_pods.go:61] "metrics-server-f79f97bbb-2pv7p" [1246f427-ed62-4202-8170-5ae96be7ccf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:31.313671  531586 system_pods.go:61] "storage-provisioner" [7b83dbf7-d497-42bb-9489-614ae5ba76fa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 13:29:31.313680  531586 system_pods.go:74] duration metric: took 20.080673ms to wait for pod list to return data ...
	I0127 13:29:31.313709  531586 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:29:31.321205  531586 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:29:31.321236  531586 node_conditions.go:123] node cpu capacity is 2
	I0127 13:29:31.321251  531586 node_conditions.go:105] duration metric: took 7.532371ms to run NodePressure ...
	I0127 13:29:31.321276  531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:31.758136  531586 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:29:31.783447  531586 ops.go:34] apiserver oom_adj: -16
	I0127 13:29:31.783539  531586 kubeadm.go:597] duration metric: took 8.606153189s to restartPrimaryControlPlane
	I0127 13:29:31.783582  531586 kubeadm.go:394] duration metric: took 8.700235213s to StartCluster
	I0127 13:29:31.783614  531586 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:31.783739  531586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:29:31.786536  531586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:29:31.786926  531586 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:29:31.787022  531586 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:29:31.787188  531586 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-296225"
	I0127 13:29:31.787308  531586 config.go:182] Loaded profile config "newest-cni-296225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:29:31.787320  531586 addons.go:69] Setting metrics-server=true in profile "newest-cni-296225"
	I0127 13:29:31.787353  531586 addons.go:238] Setting addon metrics-server=true in "newest-cni-296225"
	W0127 13:29:31.787367  531586 addons.go:247] addon metrics-server should already be in state true
	I0127 13:29:31.787318  531586 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-296225"
	W0127 13:29:31.787388  531586 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:29:31.787413  531586 host.go:66] Checking if "newest-cni-296225" exists ...
	I0127 13:29:31.787446  531586 host.go:66] Checking if "newest-cni-296225" exists ...
	I0127 13:29:31.787286  531586 addons.go:69] Setting dashboard=true in profile "newest-cni-296225"
	I0127 13:29:31.787526  531586 addons.go:238] Setting addon dashboard=true in "newest-cni-296225"
	W0127 13:29:31.787557  531586 addons.go:247] addon dashboard should already be in state true
	I0127 13:29:31.787597  531586 host.go:66] Checking if "newest-cni-296225" exists ...
	I0127 13:29:31.787246  531586 addons.go:69] Setting default-storageclass=true in profile "newest-cni-296225"
	I0127 13:29:31.787654  531586 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-296225"
	I0127 13:29:31.787886  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.787922  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.787946  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.787971  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.788040  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.788067  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.788279  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.788348  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.791198  531586 out.go:177] * Verifying Kubernetes components...
	I0127 13:29:31.792729  531586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:29:31.809862  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43917
	I0127 13:29:31.810576  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.810735  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43481
	I0127 13:29:31.811453  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.811479  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.811565  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.812009  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.812033  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.812507  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.814254  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0127 13:29:31.814774  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.815750  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.816710  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.816754  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.817133  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.817157  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.817572  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.818143  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.818200  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.819519  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.824362  531586 addons.go:238] Setting addon default-storageclass=true in "newest-cni-296225"
	W0127 13:29:31.824386  531586 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:29:31.824421  531586 host.go:66] Checking if "newest-cni-296225" exists ...
	I0127 13:29:31.824804  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.824849  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.835403  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I0127 13:29:31.836274  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.836962  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.836997  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.837484  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.838061  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.838106  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.839703  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37671
	I0127 13:29:31.844903  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36575
	I0127 13:29:31.850434  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38467
	I0127 13:29:31.864579  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.864731  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.864805  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.865332  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.865353  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.865507  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.865520  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.865755  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.865888  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.866153  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.866263  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.866280  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.866349  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.866765  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.867410  531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:29:31.867459  531586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:29:31.869030  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:31.870746  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:31.871229  531586 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:29:31.872679  531586 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:29:31.872852  531586 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:31.872877  531586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:29:31.872899  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:31.874840  531586 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:29:31.874867  531586 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:29:31.874889  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:31.879359  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.879992  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.880845  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:31.880876  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.880911  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:31.880935  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.881182  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:31.881276  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:31.881374  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:31.881423  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:31.881494  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:31.881545  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:31.881692  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:31.881713  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:31.890590  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42247
	I0127 13:29:31.891311  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.891961  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.891983  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.892382  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.892632  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.894810  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:31.895223  531586 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:31.895240  531586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:29:31.895450  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:31.895697  531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43877
	I0127 13:29:31.896698  531586 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:31.897633  531586 main.go:141] libmachine: Using API Version  1
	I0127 13:29:31.897658  531586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:31.898129  531586 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:31.898280  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
	I0127 13:29:31.899110  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.899759  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:31.899782  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.899962  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:31.900155  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:31.900337  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:31.900466  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:31.904472  531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
	I0127 13:29:31.907054  531586 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:29:31.908332  531586 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:29:28.595128  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:29:28.595147  529417 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:29:28.595179  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.596235  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.597222  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.597304  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.597628  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.597788  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.597943  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.598078  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.599130  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.599670  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.599694  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.599880  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.600049  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.600195  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.600327  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.610825  529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I0127 13:29:28.611379  529417 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:29:28.611919  529417 main.go:141] libmachine: Using API Version  1
	I0127 13:29:28.611939  529417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:29:28.612288  529417 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:29:28.612480  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
	I0127 13:29:28.614326  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
	I0127 13:29:28.614636  529417 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:28.614668  529417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:29:28.614688  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
	I0127 13:29:28.618088  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.618805  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
	I0127 13:29:28.618958  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
	I0127 13:29:28.619294  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
	I0127 13:29:28.619517  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
	I0127 13:29:28.619738  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
	I0127 13:29:28.619953  529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
	I0127 13:29:28.750007  529417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:28.770798  529417 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-325510" to be "Ready" ...
	I0127 13:29:28.794753  529417 node_ready.go:49] node "default-k8s-diff-port-325510" has status "Ready":"True"
	I0127 13:29:28.794783  529417 node_ready.go:38] duration metric: took 23.945006ms for node "default-k8s-diff-port-325510" to be "Ready" ...
	I0127 13:29:28.794796  529417 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:28.801618  529417 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:28.841055  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:29:28.841089  529417 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:29:28.865445  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:29:28.865479  529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:29:28.870120  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:28.887649  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:29:28.887691  529417 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:29:28.908488  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:28.926717  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:29:28.926752  529417 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:29:28.949234  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:29:28.949269  529417 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:29:28.983403  529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:28.983438  529417 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:29:29.010532  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:29:29.010567  529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:29:29.085215  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:29:29.085250  529417 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:29:29.085479  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:29.180902  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:29:29.180935  529417 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:29:29.239792  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:29:29.239830  529417 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:29:29.350534  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:29:29.350566  529417 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:29:29.463271  529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:29.463315  529417 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:29:29.551176  529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:30.055621  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.147081618s)
	I0127 13:29:30.055704  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.055723  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.056191  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.056215  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.056226  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.056255  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.056323  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.056341  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18618522s)
	I0127 13:29:30.056436  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.056465  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.056627  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.056649  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.056963  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.058774  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.058792  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.058808  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.058817  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.059068  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.059083  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.059098  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.083977  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.084003  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.084571  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.084583  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.084595  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.830919  529417 pod_ready.go:103] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:30.961132  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.875594685s)
	I0127 13:29:30.961202  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.961219  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.963600  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.963608  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.963645  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.963654  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:30.963662  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:30.964368  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
	I0127 13:29:30.964392  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:30.964451  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:30.964463  529417 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-325510"
	I0127 13:29:32.478187  529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.926948394s)
	I0127 13:29:32.478257  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:32.478272  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:32.478650  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:32.478671  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:32.478683  529417 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:32.478693  529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
	I0127 13:29:32.479015  529417 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:32.479033  529417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:32.482147  529417 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-325510 addons enable metrics-server
	
	I0127 13:29:32.483736  529417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:29:32.484840  529417 addons.go:514] duration metric: took 3.958103252s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:29:31.909581  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:29:31.909609  531586 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:29:31.909639  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
	I0127 13:29:31.913216  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.913664  531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
	I0127 13:29:31.913695  531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
	I0127 13:29:31.913996  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
	I0127 13:29:31.914211  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
	I0127 13:29:31.914377  531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
	I0127 13:29:31.914514  531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
	I0127 13:29:32.089563  531586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:29:32.127765  531586 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:32.127896  531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:32.149480  531586 api_server.go:72] duration metric: took 362.501205ms to wait for apiserver process to appear ...
	I0127 13:29:32.149531  531586 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:32.149576  531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
	I0127 13:29:32.170573  531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 200:
	ok
	I0127 13:29:32.171739  531586 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:32.171771  531586 api_server.go:131] duration metric: took 22.230634ms to wait for apiserver health ...
	I0127 13:29:32.171784  531586 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:32.186307  531586 system_pods.go:59] 8 kube-system pods found
	I0127 13:29:32.186342  531586 system_pods.go:61] "coredns-668d6bf9bc-xvbfh" [0d7c4469-d90e-4487-8433-1167183525e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:29:32.186349  531586 system_pods.go:61] "etcd-newest-cni-296225" [97ed55b3-82a8-4ecf-a721-26a592f2c8cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:29:32.186360  531586 system_pods.go:61] "kube-apiserver-newest-cni-296225" [d31606a7-2b78-4859-80a7-35b783b0a444] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:29:32.186368  531586 system_pods.go:61] "kube-controller-manager-newest-cni-296225" [4d6c4da8-a13a-44c2-a877-13b9453142a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:29:32.186373  531586 system_pods.go:61] "kube-proxy-dzvvc" [1ec15899-c7eb-436d-9e74-fadde7ecacb8] Running
	I0127 13:29:32.186380  531586 system_pods.go:61] "kube-scheduler-newest-cni-296225" [2c230f78-68ac-4abb-9cdd-5cf666793981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:29:32.186388  531586 system_pods.go:61] "metrics-server-f79f97bbb-2pv7p" [1246f427-ed62-4202-8170-5ae96be7ccf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:32.186393  531586 system_pods.go:61] "storage-provisioner" [7b83dbf7-d497-42bb-9489-614ae5ba76fa] Running
	I0127 13:29:32.186408  531586 system_pods.go:74] duration metric: took 14.616708ms to wait for pod list to return data ...
	I0127 13:29:32.186420  531586 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:29:32.194387  531586 default_sa.go:45] found service account: "default"
	I0127 13:29:32.194429  531586 default_sa.go:55] duration metric: took 7.999321ms for default service account to be created ...
	I0127 13:29:32.194447  531586 kubeadm.go:582] duration metric: took 407.475818ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:29:32.194469  531586 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:29:32.215128  531586 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:29:32.215228  531586 node_conditions.go:123] node cpu capacity is 2
	I0127 13:29:32.215257  531586 node_conditions.go:105] duration metric: took 20.782574ms to run NodePressure ...
	I0127 13:29:32.215325  531586 start.go:241] waiting for startup goroutines ...
	I0127 13:29:32.224708  531586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:29:32.224738  531586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:29:32.233504  531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:29:32.295258  531586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:29:32.295311  531586 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:29:32.340500  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:29:32.340623  531586 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:29:32.552816  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:29:32.552969  531586 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:29:32.615247  531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:29:32.615684  531586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:32.615709  531586 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:29:32.772893  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:29:32.772938  531586 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:29:32.831244  531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:29:32.939523  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:29:32.939558  531586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:29:33.121982  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:29:33.122026  531586 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:29:33.248581  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:29:33.248619  531586 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:29:33.339337  531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.105786367s)
	I0127 13:29:33.339401  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:33.339413  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:33.341380  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
	I0127 13:29:33.341463  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:33.341484  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:33.341498  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:33.341511  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:33.342973  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
	I0127 13:29:33.342984  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:33.342995  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:33.350366  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:33.350388  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:33.350671  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:33.350685  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:33.367462  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:29:33.367490  531586 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:29:33.428952  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:29:33.428989  531586 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:29:33.512094  531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:33.512127  531586 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:29:33.585612  531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:29:34.628686  531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.013367863s)
	I0127 13:29:34.628749  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:34.628761  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:34.629106  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:34.629133  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:34.629143  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:34.629153  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:34.629394  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:34.629407  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:34.834013  531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.002708663s)
	I0127 13:29:34.834087  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:34.834105  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:34.834399  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:34.834418  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:34.834427  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:34.834435  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:34.834714  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:34.834733  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:34.834746  531586 addons.go:479] Verifying addon metrics-server=true in "newest-cni-296225"
	I0127 13:29:35.573250  531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.987594335s)
	I0127 13:29:35.573316  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:35.573332  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:35.573696  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
	I0127 13:29:35.573748  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:35.573762  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:35.573820  531586 main.go:141] libmachine: Making call to close driver server
	I0127 13:29:35.573835  531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
	I0127 13:29:35.574254  531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
	I0127 13:29:35.575985  531586 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:29:35.576005  531586 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:29:35.577914  531586 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-296225 addons enable metrics-server
	
	I0127 13:29:35.579611  531586 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 13:29:35.580983  531586 addons.go:514] duration metric: took 3.79397273s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 13:29:35.581031  531586 start.go:246] waiting for cluster config update ...
	I0127 13:29:35.581050  531586 start.go:255] writing updated cluster config ...
	I0127 13:29:35.581368  531586 ssh_runner.go:195] Run: rm -f paused
	I0127 13:29:35.638909  531586 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 13:29:35.640552  531586 out.go:177] * Done! kubectl is now configured to use "newest-cni-296225" cluster and "default" namespace by default
	I0127 13:29:33.314653  529417 pod_ready.go:103] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:34.308087  529417 pod_ready.go:93] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:34.308114  529417 pod_ready.go:82] duration metric: took 5.506466228s for pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.308126  529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.314009  529417 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:34.314033  529417 pod_ready.go:82] duration metric: took 5.900062ms for pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.314044  529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.321801  529417 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:34.321823  529417 pod_ready.go:82] duration metric: took 7.77255ms for pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:34.321836  529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:36.328661  529417 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:38.833405  529417 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:39.331942  529417 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
	I0127 13:29:39.331971  529417 pod_ready.go:82] duration metric: took 5.010119744s for pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:39.331983  529417 pod_ready.go:39] duration metric: took 10.537174991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:39.332004  529417 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:29:39.332061  529417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:29:39.364826  529417 api_server.go:72] duration metric: took 10.838138782s to wait for apiserver process to appear ...
	I0127 13:29:39.364856  529417 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:29:39.364880  529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0127 13:29:39.395339  529417 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0127 13:29:39.403463  529417 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:39.403502  529417 api_server.go:131] duration metric: took 38.63787ms to wait for apiserver health ...
	I0127 13:29:39.403515  529417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:39.428974  529417 system_pods.go:59] 9 kube-system pods found
	I0127 13:29:39.429008  529417 system_pods.go:61] "coredns-668d6bf9bc-mgxmm" [15f65844-c002-4253-9f43-609e6d3d86c0] Running
	I0127 13:29:39.429013  529417 system_pods.go:61] "coredns-668d6bf9bc-rlvv2" [b116f02c-d30f-4869-bef1-55722f0f1a58] Running
	I0127 13:29:39.429016  529417 system_pods.go:61] "etcd-default-k8s-diff-port-325510" [88fd4825-b74c-43e0-8a3e-fd60bb409b76] Running
	I0127 13:29:39.429021  529417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-325510" [4eeff905-b36f-4be8-ac24-77c8421495c4] Running
	I0127 13:29:39.429024  529417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-325510" [07956b85-b521-44cc-be77-675703803a17] Running
	I0127 13:29:39.429027  529417 system_pods.go:61] "kube-proxy-gb24h" [d0d50b9f-b02f-49dd-9a7a-78e202ce247a] Running
	I0127 13:29:39.429031  529417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-325510" [a7c2c0c5-c386-454d-9542-852b02901060] Running
	I0127 13:29:39.429037  529417 system_pods.go:61] "metrics-server-f79f97bbb-vtvnn" [07e0c335-6a2b-4ef3-b153-3689cdb7ccaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:39.429041  529417 system_pods.go:61] "storage-provisioner" [7b76ca76-2bfc-44c4-bfc3-5ac3f4cde72b] Running
	I0127 13:29:39.429048  529417 system_pods.go:74] duration metric: took 25.526569ms to wait for pod list to return data ...
	I0127 13:29:39.429056  529417 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:29:39.449041  529417 default_sa.go:45] found service account: "default"
	I0127 13:29:39.449083  529417 default_sa.go:55] duration metric: took 20.019081ms for default service account to be created ...
	I0127 13:29:39.449098  529417 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:29:39.468326  529417 system_pods.go:87] 9 kube-system pods found
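The wait sequence above (apiserver process, healthz probe, kube-system pod inventory, default service account) can be re-run by hand against the same profile. A minimal sketch, assuming the default-k8s-diff-port-325510 kubeconfig context from this run is still available and that anonymous access to /healthz is allowed (kubeadm's default system:public-info-viewer binding):

  # apiserver process on the node, same check the test performs over SSH
  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-325510 "pgrep kube-apiserver"
  # healthz endpoint that returned 200 "ok" above
  curl -k https://192.168.61.7:8444/healthz
  # kube-system pods and the default service account the test waits for
  kubectl --context default-k8s-diff-port-325510 -n kube-system get pods
  kubectl --context default-k8s-diff-port-325510 -n default get serviceaccount default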
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c275df95d6ccd       523cad1a4df73       46 seconds ago      Exited              dashboard-metrics-scraper   9                   65be5b0919164       dashboard-metrics-scraper-86c6bf9756-pcpvf
	ca1237bce7202       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   1666f38203943       kubernetes-dashboard-7779f9b69b-qc62j
	a288c6acdbe35       6e38f40d628db       22 minutes ago      Running             storage-provisioner         0                   2ef5a51293d77       storage-provisioner
	6b715605dc9c1       c69fa2e9cbf5f       22 minutes ago      Running             coredns                     0                   38f55bb37b315       coredns-668d6bf9bc-rlvv2
	03194d52a3cbc       c69fa2e9cbf5f       22 minutes ago      Running             coredns                     0                   470a549455a7a       coredns-668d6bf9bc-mgxmm
	bdff7f10e4adf       e29f9c7391fd9       22 minutes ago      Running             kube-proxy                  0                   791a8b77234d3       kube-proxy-gb24h
	c1f4ca8d06fb9       95c0bda56fc4d       22 minutes ago      Running             kube-apiserver              2                   c41a2b3fa04e8       kube-apiserver-default-k8s-diff-port-325510
	a3f76d46e8a9b       2b0d6572d062c       22 minutes ago      Running             kube-scheduler              2                   f1e35d49af2eb       kube-scheduler-default-k8s-diff-port-325510
	fd6c891095b90       019ee182b58e2       22 minutes ago      Running             kube-controller-manager     2                   9b2d59408fba4       kube-controller-manager-default-k8s-diff-port-325510
	3aa264a747625       a9e7e6b294baf       22 minutes ago      Running             etcd                        2                   73b4376f52649       etcd-default-k8s-diff-port-325510
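The container status table above is collected on the node itself; the same view can be reproduced over SSH with crictl. A sketch, using the truncated container ID of the exited dashboard-metrics-scraper attempt from this run:

  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-325510 "sudo crictl ps -a"
  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-325510 "sudo crictl logs c275df95d6ccd"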
	
	
	==> containerd <==
	Jan 27 13:45:20 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:45:20.055396475Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 13:45:20 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:45:20.057859869Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 13:45:20 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:45:20.058023365Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 13:45:36 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:45:36.045706206Z" level=info msg="CreateContainer within sandbox \"65be5b091916453fa3b139851e5cf378da696bd5a6afb2bf0dc9044bf9212b52\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 27 13:45:36 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:45:36.072129028Z" level=info msg="CreateContainer within sandbox \"65be5b091916453fa3b139851e5cf378da696bd5a6afb2bf0dc9044bf9212b52\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"5aece725a367e81ad280d1c1650e71a5948a1028994b12d259c6b9175c4994e3\""
	Jan 27 13:45:36 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:45:36.073304432Z" level=info msg="StartContainer for \"5aece725a367e81ad280d1c1650e71a5948a1028994b12d259c6b9175c4994e3\""
	Jan 27 13:45:36 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:45:36.170457457Z" level=info msg="StartContainer for \"5aece725a367e81ad280d1c1650e71a5948a1028994b12d259c6b9175c4994e3\" returns successfully"
	Jan 27 13:45:36 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:45:36.213768238Z" level=info msg="shim disconnected" id=5aece725a367e81ad280d1c1650e71a5948a1028994b12d259c6b9175c4994e3 namespace=k8s.io
	Jan 27 13:45:36 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:45:36.214246742Z" level=warning msg="cleaning up after shim disconnected" id=5aece725a367e81ad280d1c1650e71a5948a1028994b12d259c6b9175c4994e3 namespace=k8s.io
	Jan 27 13:45:36 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:45:36.214532400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 13:45:36 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:45:36.886065461Z" level=info msg="RemoveContainer for \"b7dd62e665efd4a7e2dac3ad5e4cb2ae8a29cd1c0d323c7b15d4e0c4675a2450\""
	Jan 27 13:45:36 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:45:36.895638458Z" level=info msg="RemoveContainer for \"b7dd62e665efd4a7e2dac3ad5e4cb2ae8a29cd1c0d323c7b15d4e0c4675a2450\" returns successfully"
	Jan 27 13:50:22 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:22.044529476Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 13:50:22 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:22.053331162Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 13:50:22 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:22.055669060Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 13:50:22 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:22.055733774Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 13:50:48 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:48.045839969Z" level=info msg="CreateContainer within sandbox \"65be5b091916453fa3b139851e5cf378da696bd5a6afb2bf0dc9044bf9212b52\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 27 13:50:48 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:48.076906982Z" level=info msg="CreateContainer within sandbox \"65be5b091916453fa3b139851e5cf378da696bd5a6afb2bf0dc9044bf9212b52\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"c275df95d6ccd09494fdeed899af06b41d5121899be825922a0bb91e6f927602\""
	Jan 27 13:50:48 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:48.078817393Z" level=info msg="StartContainer for \"c275df95d6ccd09494fdeed899af06b41d5121899be825922a0bb91e6f927602\""
	Jan 27 13:50:48 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:48.166294228Z" level=info msg="StartContainer for \"c275df95d6ccd09494fdeed899af06b41d5121899be825922a0bb91e6f927602\" returns successfully"
	Jan 27 13:50:48 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:48.211501039Z" level=info msg="shim disconnected" id=c275df95d6ccd09494fdeed899af06b41d5121899be825922a0bb91e6f927602 namespace=k8s.io
	Jan 27 13:50:48 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:48.211648423Z" level=warning msg="cleaning up after shim disconnected" id=c275df95d6ccd09494fdeed899af06b41d5121899be825922a0bb91e6f927602 namespace=k8s.io
	Jan 27 13:50:48 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:48.211660898Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 13:50:48 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:48.673914216Z" level=info msg="RemoveContainer for \"5aece725a367e81ad280d1c1650e71a5948a1028994b12d259c6b9175c4994e3\""
	Jan 27 13:50:48 default-k8s-diff-port-325510 containerd[560]: time="2025-01-27T13:50:48.683169385Z" level=info msg="RemoveContainer for \"5aece725a367e81ad280d1c1650e71a5948a1028994b12d259c6b9175c4994e3\" returns successfully"
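The repeated PullImage failures above are for the metrics-server test image, which this test family appears to point at the unresolvable registry fake.domain deliberately, so the pod is expected to sit in ImagePullBackOff. A quick client-side confirmation (the k8s-app=metrics-server label is assumed from the addon manifest):

  kubectl --context default-k8s-diff-port-325510 -n kube-system get pods -l k8s-app=metrics-server
  kubectl --context default-k8s-diff-port-325510 -n kube-system describe pod -l k8s-app=metrics-server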
	
	
	==> coredns [03194d52a3cbc520d241ae80c977d31ac5ab18ec353cf415d70c2d33971bf71e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [6b715605dc9c1f52bac2db40602fa0e905660d02a8afb1cf3cab73dad6f12fc7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-325510
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-325510
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
	                    minikube.k8s.io/name=default-k8s-diff-port-325510
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T13_29_25_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 13:29:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-325510
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 13:51:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 13:48:38 +0000   Mon, 27 Jan 2025 13:29:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 13:48:38 +0000   Mon, 27 Jan 2025 13:29:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 13:48:38 +0000   Mon, 27 Jan 2025 13:29:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 13:48:38 +0000   Mon, 27 Jan 2025 13:29:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.7
	  Hostname:    default-k8s-diff-port-325510
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a7f123dab9d42cebc886b31842c3c6f
	  System UUID:                4a7f123d-ab9d-42ce-bc88-6b31842c3c6f
	  Boot ID:                    868060a3-194b-4d74-b80f-a1f20a3e0bf0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-mgxmm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-668d6bf9bc-rlvv2                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-default-k8s-diff-port-325510                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-default-k8s-diff-port-325510             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-325510    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-gb24h                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-default-k8s-diff-port-325510             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-f79f97bbb-vtvnn                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-pcpvf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-qc62j                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-325510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-325510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-325510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node default-k8s-diff-port-325510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node default-k8s-diff-port-325510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node default-k8s-diff-port-325510 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-325510 event: Registered Node default-k8s-diff-port-325510 in Controller
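The node description above was captured at log-collection time; it can be refreshed directly, assuming the same context:

  kubectl --context default-k8s-diff-port-325510 describe node default-k8s-diff-port-325510
  kubectl --context default-k8s-diff-port-325510 get node default-k8s-diff-port-325510 -o wide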
	
	
	==> dmesg <==
	[  +0.047195] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.109333] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.922337] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.506805] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.634291] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +0.064416] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079452] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
	[  +0.195129] systemd-fstab-generator[510]: Ignoring "noauto" option for root device
	[  +0.178357] systemd-fstab-generator[522]: Ignoring "noauto" option for root device
	[  +0.348305] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +2.012344] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +2.135620] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.062106] kauditd_printk_skb: 186 callbacks suppressed
	[  +5.607848] kauditd_printk_skb: 69 callbacks suppressed
	[Jan27 13:25] kauditd_printk_skb: 86 callbacks suppressed
	[ +19.635262] kauditd_printk_skb: 12 callbacks suppressed
	[Jan27 13:29] systemd-fstab-generator[3156]: Ignoring "noauto" option for root device
	[  +7.075667] systemd-fstab-generator[3535]: Ignoring "noauto" option for root device
	[  +0.091270] kauditd_printk_skb: 87 callbacks suppressed
	[  +3.913551] systemd-fstab-generator[3624]: Ignoring "noauto" option for root device
	[  +1.489262] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.433261] kauditd_printk_skb: 86 callbacks suppressed
	[Jan27 13:30] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [3aa264a7476251b4c767df86bdb786974964e09d1c2093ac034ac459295d1838] <==
	{"level":"info","ts":"2025-01-27T13:29:19.996992Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"226ef5c5e7cc232","local-member-attributes":"{Name:default-k8s-diff-port-325510 ClientURLs:[https://192.168.61.7:2379]}","request-path":"/0/members/226ef5c5e7cc232/attributes","cluster-id":"25303c38baa89c47","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T13:29:19.997062Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T13:29:19.997956Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:29:19.998379Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T13:29:20.002006Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T13:29:20.003190Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.7:2379"}
	{"level":"info","ts":"2025-01-27T13:29:20.002173Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T13:29:20.004010Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"25303c38baa89c47","local-member-id":"226ef5c5e7cc232","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:29:20.009025Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:29:20.009123Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:29:20.004463Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T13:29:20.009143Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T13:29:20.015808Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T13:29:36.512668Z","caller":"traceutil/trace.go:171","msg":"trace[284892320] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"147.604778ms","start":"2025-01-27T13:29:36.365031Z","end":"2025-01-27T13:29:36.512636Z","steps":["trace[284892320] 'process raft request'  (duration: 147.424662ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:29:38.653390Z","caller":"traceutil/trace.go:171","msg":"trace[1544881277] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"121.227558ms","start":"2025-01-27T13:29:38.532095Z","end":"2025-01-27T13:29:38.653323Z","steps":["trace[1544881277] 'process raft request'  (duration: 121.057233ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:29:43.082157Z","caller":"traceutil/trace.go:171","msg":"trace[1665024725] transaction","detail":"{read_only:false; response_revision:536; number_of_response:1; }","duration":"119.49679ms","start":"2025-01-27T13:29:42.962640Z","end":"2025-01-27T13:29:43.082137Z","steps":["trace[1665024725] 'process raft request'  (duration: 119.35156ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:39:20.120188Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":866}
	{"level":"info","ts":"2025-01-27T13:39:20.164978Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":866,"took":"43.035265ms","hash":1232556171,"current-db-size-bytes":2945024,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2945024,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-01-27T13:39:20.165083Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1232556171,"revision":866,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T13:44:20.133822Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1119}
	{"level":"info","ts":"2025-01-27T13:44:20.138970Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1119,"took":"4.038593ms","hash":3571641319,"current-db-size-bytes":2945024,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1765376,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T13:44:20.139223Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3571641319,"revision":1119,"compact-revision":866}
	{"level":"info","ts":"2025-01-27T13:49:20.141277Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1370}
	{"level":"info","ts":"2025-01-27T13:49:20.146100Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1370,"took":"4.047295ms","hash":2278871997,"current-db-size-bytes":2945024,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1789952,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T13:49:20.146163Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2278871997,"revision":1370,"compact-revision":1119}
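The compaction entries above also report the backend size (about 2.9 MB). One way to get the same numbers interactively is etcdctl inside the etcd static pod; a sketch, assuming minikube's usual /var/lib/minikube/certs layout for the etcd certificates:

  kubectl --context default-k8s-diff-port-325510 -n kube-system exec etcd-default-k8s-diff-port-325510 -- \
    etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint status --write-out=table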
	
	
	==> kernel <==
	 13:51:34 up 27 min,  0 users,  load average: 0.06, 0.25, 0.26
	Linux default-k8s-diff-port-325510 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c1f4ca8d06fb9ad738bcde5206113cdc84afd38b34a9e2a596d01ed030b99647] <==
	I0127 13:47:22.915775       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:47:22.915867       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 13:49:21.913751       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:49:21.914251       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 13:49:22.916309       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 13:49:22.916351       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:49:22.916839       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 13:49:22.917128       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 13:49:22.918367       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:49:22.918643       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 13:50:22.919739       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 13:50:22.919753       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:50:22.920181       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 13:50:22.920403       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 13:50:22.921710       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:50:22.921630       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
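The 503s above mean the v1beta1.metrics.k8s.io APIService has no healthy backend, consistent with the metrics-server pod stuck in ImagePullBackOff. A sketch of the client-side checks:

  kubectl --context default-k8s-diff-port-325510 get apiservice v1beta1.metrics.k8s.io
  # expected to fail while the APIService is unavailable
  kubectl --context default-k8s-diff-port-325510 top nodes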
	
	
	==> kube-controller-manager [fd6c891095b900b78ffb3de6347ef01c66b586feb0c789bbe212473d58508a8a] <==
	E0127 13:46:58.714685       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:46:58.826902       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:47:28.721460       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:47:28.836444       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:47:58.729860       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:47:58.845132       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:48:28.737722       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:48:28.859799       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 13:48:38.202291       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-325510"
	E0127 13:48:58.743467       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:48:58.867898       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:49:28.750442       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:49:28.876102       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:49:58.760072       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:49:58.885688       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:50:28.767284       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:50:28.900219       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 13:50:34.062328       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="268.845µs"
	I0127 13:50:45.060647       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="144.364µs"
	I0127 13:50:48.692782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="849.257µs"
	I0127 13:50:49.805348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="79.184µs"
	E0127 13:50:58.775999       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:50:58.909866       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:51:28.782891       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:51:28.919475       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [bdff7f10e4adf0ee6186f47d8ad5a67d62072a18394ba88803323f74e873f031] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 13:29:31.395048       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 13:29:31.421789       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.7"]
	E0127 13:29:31.421857       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 13:29:31.884681       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 13:29:31.905091       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 13:29:31.911804       1 server_linux.go:170] "Using iptables Proxier"
	I0127 13:29:31.964998       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 13:29:31.966607       1 server.go:497] "Version info" version="v1.32.1"
	I0127 13:29:31.966658       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:29:31.976962       1 config.go:199] "Starting service config controller"
	I0127 13:29:31.977018       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 13:29:31.977068       1 config.go:105] "Starting endpoint slice config controller"
	I0127 13:29:31.977075       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 13:29:31.977197       1 config.go:329] "Starting node config controller"
	I0127 13:29:31.977314       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 13:29:32.077847       1 shared_informer.go:320] Caches are synced for node config
	I0127 13:29:32.078236       1 shared_informer.go:320] Caches are synced for service config
	I0127 13:29:32.078250       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a3f76d46e8a9b8a5eaf536c8009ec4543ebb2b73a07c36912bd5323c86aeba05] <==
	W0127 13:29:21.940774       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 13:29:21.944513       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 13:29:22.809661       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 13:29:22.809717       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:22.871859       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 13:29:22.871918       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:22.892109       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 13:29:22.892161       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:22.902121       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 13:29:22.902238       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:22.914104       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 13:29:22.914175       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:23.025892       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 13:29:23.025968       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 13:29:23.069379       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 13:29:23.069460       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:23.101393       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 13:29:23.101466       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:23.215904       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 13:29:23.216001       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:23.226471       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 13:29:23.226540       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 13:29:23.330699       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 13:29:23.330734       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 13:29:25.821905       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 13:50:25 default-k8s-diff-port-325510 kubelet[3542]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 13:50:34 default-k8s-diff-port-325510 kubelet[3542]: E0127 13:50:34.043984    3542 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vtvnn" podUID="07e0c335-6a2b-4ef3-b153-3689cdb7ccaf"
	Jan 27 13:50:36 default-k8s-diff-port-325510 kubelet[3542]: I0127 13:50:36.043118    3542 scope.go:117] "RemoveContainer" containerID="5aece725a367e81ad280d1c1650e71a5948a1028994b12d259c6b9175c4994e3"
	Jan 27 13:50:36 default-k8s-diff-port-325510 kubelet[3542]: E0127 13:50:36.044340    3542 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-pcpvf_kubernetes-dashboard(73631ff9-45c9-43d4-9070-26c5ef8175c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-pcpvf" podUID="73631ff9-45c9-43d4-9070-26c5ef8175c3"
	Jan 27 13:50:45 default-k8s-diff-port-325510 kubelet[3542]: E0127 13:50:45.043915    3542 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vtvnn" podUID="07e0c335-6a2b-4ef3-b153-3689cdb7ccaf"
	Jan 27 13:50:48 default-k8s-diff-port-325510 kubelet[3542]: I0127 13:50:48.042141    3542 scope.go:117] "RemoveContainer" containerID="5aece725a367e81ad280d1c1650e71a5948a1028994b12d259c6b9175c4994e3"
	Jan 27 13:50:48 default-k8s-diff-port-325510 kubelet[3542]: I0127 13:50:48.670974    3542 scope.go:117] "RemoveContainer" containerID="5aece725a367e81ad280d1c1650e71a5948a1028994b12d259c6b9175c4994e3"
	Jan 27 13:50:48 default-k8s-diff-port-325510 kubelet[3542]: I0127 13:50:48.671260    3542 scope.go:117] "RemoveContainer" containerID="c275df95d6ccd09494fdeed899af06b41d5121899be825922a0bb91e6f927602"
	Jan 27 13:50:48 default-k8s-diff-port-325510 kubelet[3542]: E0127 13:50:48.671427    3542 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-pcpvf_kubernetes-dashboard(73631ff9-45c9-43d4-9070-26c5ef8175c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-pcpvf" podUID="73631ff9-45c9-43d4-9070-26c5ef8175c3"
	Jan 27 13:50:49 default-k8s-diff-port-325510 kubelet[3542]: I0127 13:50:49.790282    3542 scope.go:117] "RemoveContainer" containerID="c275df95d6ccd09494fdeed899af06b41d5121899be825922a0bb91e6f927602"
	Jan 27 13:50:49 default-k8s-diff-port-325510 kubelet[3542]: E0127 13:50:49.790474    3542 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-pcpvf_kubernetes-dashboard(73631ff9-45c9-43d4-9070-26c5ef8175c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-pcpvf" podUID="73631ff9-45c9-43d4-9070-26c5ef8175c3"
	Jan 27 13:50:56 default-k8s-diff-port-325510 kubelet[3542]: E0127 13:50:56.043656    3542 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vtvnn" podUID="07e0c335-6a2b-4ef3-b153-3689cdb7ccaf"
	Jan 27 13:51:02 default-k8s-diff-port-325510 kubelet[3542]: I0127 13:51:02.042911    3542 scope.go:117] "RemoveContainer" containerID="c275df95d6ccd09494fdeed899af06b41d5121899be825922a0bb91e6f927602"
	Jan 27 13:51:02 default-k8s-diff-port-325510 kubelet[3542]: E0127 13:51:02.043132    3542 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-pcpvf_kubernetes-dashboard(73631ff9-45c9-43d4-9070-26c5ef8175c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-pcpvf" podUID="73631ff9-45c9-43d4-9070-26c5ef8175c3"
	Jan 27 13:51:08 default-k8s-diff-port-325510 kubelet[3542]: E0127 13:51:08.043389    3542 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vtvnn" podUID="07e0c335-6a2b-4ef3-b153-3689cdb7ccaf"
	Jan 27 13:51:16 default-k8s-diff-port-325510 kubelet[3542]: I0127 13:51:16.042630    3542 scope.go:117] "RemoveContainer" containerID="c275df95d6ccd09494fdeed899af06b41d5121899be825922a0bb91e6f927602"
	Jan 27 13:51:16 default-k8s-diff-port-325510 kubelet[3542]: E0127 13:51:16.043433    3542 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-pcpvf_kubernetes-dashboard(73631ff9-45c9-43d4-9070-26c5ef8175c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-pcpvf" podUID="73631ff9-45c9-43d4-9070-26c5ef8175c3"
	Jan 27 13:51:23 default-k8s-diff-port-325510 kubelet[3542]: E0127 13:51:23.043449    3542 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vtvnn" podUID="07e0c335-6a2b-4ef3-b153-3689cdb7ccaf"
	Jan 27 13:51:25 default-k8s-diff-port-325510 kubelet[3542]: E0127 13:51:25.088518    3542 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 13:51:25 default-k8s-diff-port-325510 kubelet[3542]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 13:51:25 default-k8s-diff-port-325510 kubelet[3542]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 13:51:25 default-k8s-diff-port-325510 kubelet[3542]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 13:51:25 default-k8s-diff-port-325510 kubelet[3542]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 13:51:30 default-k8s-diff-port-325510 kubelet[3542]: I0127 13:51:30.042387    3542 scope.go:117] "RemoveContainer" containerID="c275df95d6ccd09494fdeed899af06b41d5121899be825922a0bb91e6f927602"
	Jan 27 13:51:30 default-k8s-diff-port-325510 kubelet[3542]: E0127 13:51:30.042633    3542 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-pcpvf_kubernetes-dashboard(73631ff9-45c9-43d4-9070-26c5ef8175c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-pcpvf" podUID="73631ff9-45c9-43d4-9070-26c5ef8175c3"
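The kubelet messages above show dashboard-metrics-scraper crash-looping (back-off 5m0s). The exit reason of the last attempt can be pulled from the previous container's logs, using the pod name from this run:

  kubectl --context default-k8s-diff-port-325510 -n kubernetes-dashboard logs dashboard-metrics-scraper-86c6bf9756-pcpvf --previous
  kubectl --context default-k8s-diff-port-325510 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-86c6bf9756-pcpvf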
	
	
	==> kubernetes-dashboard [ca1237bce720233a3e17ea61a6f4c55d1a98dd459ac9524871c82f40eedaa923] <==
	2025/01/27 13:39:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:39:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:40:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:40:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:41:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:41:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:42:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:42:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:43:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:43:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:44:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:44:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:45:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:45:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:46:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:46:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:47:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:47:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:48:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:48:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:49:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:49:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:50:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:50:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:51:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a288c6acdbe355bf4541f5bbbc41530468ce94ee5c6fded53f7e3fa4529b1e24] <==
	I0127 13:29:32.133244       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 13:29:32.228922       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 13:29:32.229433       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 13:29:32.300004       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 13:29:32.306623       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-325510_a631c8bc-9dfd-482a-99c6-40aded9b3fce!
	I0127 13:29:32.308099       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36b81dc6-b280-445e-addc-5d21eef0ddc2", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-325510_a631c8bc-9dfd-482a-99c6-40aded9b3fce became leader
	I0127 13:29:32.407656       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-325510_a631c8bc-9dfd-482a-99c6-40aded9b3fce!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-325510 -n default-k8s-diff-port-325510
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-325510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-vtvnn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-325510 describe pod metrics-server-f79f97bbb-vtvnn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-325510 describe pod metrics-server-f79f97bbb-vtvnn: exit status 1 (62.994494ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-vtvnn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-325510 describe pod metrics-server-f79f97bbb-vtvnn: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1642.10s)
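The post-mortem above follows a fixed pattern: list every pod whose status.phase is not Running, then describe each one; the NotFound error only means the metrics-server pod was replaced or removed between the two kubectl calls. A minimal sketch of that pattern in Go (not the helpers_test.go implementation; the context name is copied from the log above):

// Minimal sketch of the post-mortem step shown above: list pods that are not
// Running, then describe each one. The context name is taken from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctxName := "default-k8s-diff-port-325510"

	// Same query the harness runs: names of pods whose phase is not Running.
	out, err := exec.Command("kubectl", "--context", ctxName, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}

	for _, name := range strings.Fields(string(out)) {
		// describe can fail with NotFound if the pod was replaced in the
		// meantime, exactly as in the output above; that is not itself a bug.
		desc, err := exec.Command("kubectl", "--context", ctxName, "describe", "pod", name).CombinedOutput()
		fmt.Printf("--- %s ---\n%s\n", name, desc)
		if err != nil {
			fmt.Println("describe failed:", err)
		}
	}
}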

                                                
                                    

Test pass (275/316)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.86
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.1/json-events 3.87
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.14
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.62
22 TestOffline 85.75
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 212.72
29 TestAddons/serial/Volcano 40.88
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.51
35 TestAddons/parallel/Registry 19.53
36 TestAddons/parallel/Ingress 22.57
37 TestAddons/parallel/InspektorGadget 11.92
38 TestAddons/parallel/MetricsServer 5.78
40 TestAddons/parallel/CSI 51.54
41 TestAddons/parallel/Headlamp 27.97
42 TestAddons/parallel/CloudSpanner 5.61
43 TestAddons/parallel/LocalPath 11.27
44 TestAddons/parallel/NvidiaDevicePlugin 5.6
45 TestAddons/parallel/Yakd 10.82
47 TestAddons/StoppedEnableDisable 91.28
48 TestCertOptions 95.62
49 TestCertExpiration 325
51 TestForceSystemdFlag 54.27
52 TestForceSystemdEnv 73.31
54 TestKVMDriverInstallOrUpdate 4
58 TestErrorSpam/setup 44.47
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.77
61 TestErrorSpam/pause 1.64
62 TestErrorSpam/unpause 1.7
63 TestErrorSpam/stop 5.13
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 81.57
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 41.18
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.97
75 TestFunctional/serial/CacheCmd/cache/add_local 1.89
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 50.96
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.42
86 TestFunctional/serial/LogsFileCmd 1.44
87 TestFunctional/serial/InvalidService 4.58
89 TestFunctional/parallel/ConfigCmd 0.36
90 TestFunctional/parallel/DashboardCmd 30.43
91 TestFunctional/parallel/DryRun 0.32
92 TestFunctional/parallel/InternationalLanguage 0.24
93 TestFunctional/parallel/StatusCmd 1.08
97 TestFunctional/parallel/ServiceCmdConnect 10.55
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 42.36
101 TestFunctional/parallel/SSHCmd 0.45
102 TestFunctional/parallel/CpCmd 1.4
103 TestFunctional/parallel/MySQL 28.16
104 TestFunctional/parallel/FileSync 0.26
105 TestFunctional/parallel/CertSync 1.52
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
113 TestFunctional/parallel/License 0.17
123 TestFunctional/parallel/ServiceCmd/DeployApp 10.23
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
125 TestFunctional/parallel/ProfileCmd/profile_list 0.37
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
127 TestFunctional/parallel/MountCmd/any-port 7.51
128 TestFunctional/parallel/MountCmd/specific-port 2.07
129 TestFunctional/parallel/ServiceCmd/List 0.31
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
132 TestFunctional/parallel/ServiceCmd/Format 0.4
133 TestFunctional/parallel/MountCmd/VerifyCleanup 0.96
134 TestFunctional/parallel/ServiceCmd/URL 0.54
135 TestFunctional/parallel/Version/short 0.05
136 TestFunctional/parallel/Version/components 0.61
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
144 TestFunctional/parallel/ImageCommands/ImageBuild 4.2
145 TestFunctional/parallel/ImageCommands/Setup 1.58
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.09
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.51
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.56
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 202.38
160 TestMultiControlPlane/serial/DeployApp 4.98
161 TestMultiControlPlane/serial/PingHostFromPods 1.23
162 TestMultiControlPlane/serial/AddWorkerNode 57.72
163 TestMultiControlPlane/serial/NodeLabels 0.08
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
165 TestMultiControlPlane/serial/CopyFile 13.63
166 TestMultiControlPlane/serial/StopSecondaryNode 91.7
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
168 TestMultiControlPlane/serial/RestartSecondaryNode 41.09
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 474.64
171 TestMultiControlPlane/serial/DeleteSecondaryNode 7.14
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
173 TestMultiControlPlane/serial/StopCluster 272.99
174 TestMultiControlPlane/serial/RestartCluster 124.71
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
176 TestMultiControlPlane/serial/AddSecondaryNode 72.27
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
181 TestJSONOutput/start/Command 84.16
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.73
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.7
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 6.64
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 89.1
213 TestMountStart/serial/StartWithMountFirst 28.38
214 TestMountStart/serial/VerifyMountFirst 0.41
215 TestMountStart/serial/StartWithMountSecond 29.28
216 TestMountStart/serial/VerifyMountSecond 0.4
217 TestMountStart/serial/DeleteFirst 0.73
218 TestMountStart/serial/VerifyMountPostDelete 0.4
219 TestMountStart/serial/Stop 1.37
220 TestMountStart/serial/RestartStopped 23.14
221 TestMountStart/serial/VerifyMountPostStop 0.39
224 TestMultiNode/serial/FreshStart2Nodes 111.94
225 TestMultiNode/serial/DeployApp2Nodes 4
226 TestMultiNode/serial/PingHostFrom2Pods 0.8
227 TestMultiNode/serial/AddNode 50.84
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.6
230 TestMultiNode/serial/CopyFile 7.59
231 TestMultiNode/serial/StopNode 2.35
232 TestMultiNode/serial/StartAfterStop 36.36
233 TestMultiNode/serial/RestartKeepsNodes 314.21
234 TestMultiNode/serial/DeleteNode 2.25
235 TestMultiNode/serial/StopMultiNode 181.88
236 TestMultiNode/serial/RestartMultiNode 93.78
237 TestMultiNode/serial/ValidateNameConflict 47.22
242 TestPreload 264.28
244 TestScheduledStopUnix 115
248 TestRunningBinaryUpgrade 195.63
250 TestKubernetesUpgrade 208.3
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
254 TestNoKubernetes/serial/StartWithK8s 95.27
262 TestNetworkPlugins/group/false 3.25
266 TestNoKubernetes/serial/StartWithStopK8s 38.53
267 TestNoKubernetes/serial/Start 47.78
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
269 TestNoKubernetes/serial/ProfileList 1.76
270 TestNoKubernetes/serial/Stop 1.54
271 TestNoKubernetes/serial/StartNoArgs 44.8
273 TestPause/serial/Start 113.03
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
282 TestStoppedBinaryUpgrade/Setup 0.37
283 TestStoppedBinaryUpgrade/Upgrade 179.86
284 TestNetworkPlugins/group/auto/Start 121.26
285 TestPause/serial/SecondStartNoReconfiguration 93.41
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
287 TestNetworkPlugins/group/kindnet/Start 64.04
288 TestNetworkPlugins/group/auto/KubeletFlags 0.23
289 TestNetworkPlugins/group/auto/NetCatPod 9.26
290 TestPause/serial/Pause 0.77
291 TestNetworkPlugins/group/auto/DNS 0.15
292 TestNetworkPlugins/group/auto/Localhost 0.19
293 TestPause/serial/VerifyStatus 0.28
294 TestNetworkPlugins/group/auto/HairPin 0.14
295 TestPause/serial/Unpause 0.66
296 TestPause/serial/PauseAgain 0.8
297 TestPause/serial/DeletePaused 1.09
298 TestPause/serial/VerifyDeletedResources 0.69
299 TestNetworkPlugins/group/calico/Start 86.24
300 TestNetworkPlugins/group/custom-flannel/Start 96.09
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
303 TestNetworkPlugins/group/kindnet/NetCatPod 8.24
304 TestNetworkPlugins/group/kindnet/DNS 0.19
305 TestNetworkPlugins/group/kindnet/Localhost 0.13
306 TestNetworkPlugins/group/kindnet/HairPin 0.17
307 TestNetworkPlugins/group/enable-default-cni/Start 88.8
308 TestNetworkPlugins/group/flannel/Start 97.47
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.22
311 TestNetworkPlugins/group/calico/NetCatPod 10.27
312 TestNetworkPlugins/group/calico/DNS 0.18
313 TestNetworkPlugins/group/calico/Localhost 0.14
314 TestNetworkPlugins/group/calico/HairPin 0.17
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
317 TestNetworkPlugins/group/custom-flannel/DNS 0.19
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
320 TestNetworkPlugins/group/bridge/Start 100.54
322 TestStartStop/group/old-k8s-version/serial/FirstStart 201.05
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
330 TestNetworkPlugins/group/flannel/NetCatPod 13.36
332 TestStartStop/group/no-preload/serial/FirstStart 76.78
333 TestNetworkPlugins/group/flannel/DNS 0.16
334 TestNetworkPlugins/group/flannel/Localhost 0.12
335 TestNetworkPlugins/group/flannel/HairPin 0.13
337 TestStartStop/group/embed-certs/serial/FirstStart 88.84
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
339 TestNetworkPlugins/group/bridge/NetCatPod 11.29
340 TestNetworkPlugins/group/bridge/DNS 0.18
341 TestNetworkPlugins/group/bridge/Localhost 0.2
342 TestNetworkPlugins/group/bridge/HairPin 0.16
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.28
345 TestStartStop/group/no-preload/serial/DeployApp 10.29
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
347 TestStartStop/group/no-preload/serial/Stop 91.02
348 TestStartStop/group/embed-certs/serial/DeployApp 9.31
349 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.29
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
351 TestStartStop/group/embed-certs/serial/Stop 91.08
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.21
354 TestStartStop/group/old-k8s-version/serial/DeployApp 8.45
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.94
356 TestStartStop/group/old-k8s-version/serial/Stop 91.17
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
364 TestStartStop/group/old-k8s-version/serial/SecondStart 174.32
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.08
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
368 TestStartStop/group/old-k8s-version/serial/Pause 2.74
370 TestStartStop/group/newest-cni/serial/FirstStart 53.48
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.48
373 TestStartStop/group/newest-cni/serial/Stop 7.39
374 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
375 TestStartStop/group/newest-cni/serial/SecondStart 39.87
376 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
377 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
379 TestStartStop/group/newest-cni/serial/Pause 2.97
x
+
TestDownloadOnly/v1.20.0/json-events (7.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-570570 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-570570 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (7.856140088s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.86s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 12:10:01.024143  474275 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 12:10:01.024262  474275 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-570570
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-570570: exit status 85 (64.888127ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-570570 | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |          |
	|         | -p download-only-570570        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:09:53
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:09:53.211047  474287 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:09:53.211155  474287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:09:53.211163  474287 out.go:358] Setting ErrFile to fd 2...
	I0127 12:09:53.211168  474287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:09:53.211400  474287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	W0127 12:09:53.211538  474287 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20317-466901/.minikube/config/config.json: open /home/jenkins/minikube-integration/20317-466901/.minikube/config/config.json: no such file or directory
	I0127 12:09:53.212106  474287 out.go:352] Setting JSON to true
	I0127 12:09:53.213025  474287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":31890,"bootTime":1737947903,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:09:53.213129  474287 start.go:139] virtualization: kvm guest
	I0127 12:09:53.215521  474287 out.go:97] [download-only-570570] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0127 12:09:53.215658  474287 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 12:09:53.215688  474287 notify.go:220] Checking for updates...
	I0127 12:09:53.216940  474287 out.go:169] MINIKUBE_LOCATION=20317
	I0127 12:09:53.218420  474287 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:09:53.219772  474287 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 12:09:53.220891  474287 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	I0127 12:09:53.222057  474287 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 12:09:53.224259  474287 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 12:09:53.224525  474287 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:09:53.260346  474287 out.go:97] Using the kvm2 driver based on user configuration
	I0127 12:09:53.260378  474287 start.go:297] selected driver: kvm2
	I0127 12:09:53.260388  474287 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:09:53.260768  474287 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:09:53.260897  474287 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-466901/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:09:53.276951  474287 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:09:53.277015  474287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:09:53.277549  474287 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 12:09:53.277731  474287 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 12:09:53.277770  474287 cni.go:84] Creating CNI manager for ""
	I0127 12:09:53.277844  474287 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:09:53.277857  474287 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:09:53.277953  474287 start.go:340] cluster config:
	{Name:download-only-570570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-570570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:09:53.278175  474287 iso.go:125] acquiring lock: {Name:mkcc3db98c9d4661e75c49bd9b203d0232dff8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:09:53.280064  474287 out.go:97] Downloading VM boot image ...
	I0127 12:09:53.280117  474287 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:09:55.669353  474287 out.go:97] Starting "download-only-570570" primary control-plane node in "download-only-570570" cluster
	I0127 12:09:55.669387  474287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 12:09:55.694973  474287 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0127 12:09:55.695024  474287 cache.go:56] Caching tarball of preloaded images
	I0127 12:09:55.695241  474287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 12:09:55.696931  474287 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 12:09:55.696956  474287 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0127 12:09:55.719038  474287 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-570570 host does not exist
	  To start a cluster, run: "minikube start -p download-only-570570"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
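The "(dbg) Non-zero exit ... exit status 85" lines record an expected failure: minikube logs is run against a download-only profile whose control-plane host was never created, and the harness only checks that the command exits non-zero. A minimal sketch of capturing such an exit status in Go (assumes a minikube binary on PATH; the profile name is copied from the log above):

// Minimal sketch of capturing a command's exit status, as the harness does for
// "minikube logs -p download-only-570570" above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "logs", "-p", "download-only-570570")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// For a download-only profile the host does not exist, so a non-zero
		// exit (status 85 in the run above) is the expected outcome.
		fmt.Printf("minikube logs exited with status %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("unexpected success:\n%s", out)
}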

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-570570
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/json-events (3.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-227733 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-227733 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (3.868172699s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (3.87s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 12:10:05.240870  474275 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 12:10:05.240926  474275 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-227733
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-227733: exit status 85 (64.059973ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-570570 | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | -p download-only-570570        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 12:10 UTC | 27 Jan 25 12:10 UTC |
	| delete  | -p download-only-570570        | download-only-570570 | jenkins | v1.35.0 | 27 Jan 25 12:10 UTC | 27 Jan 25 12:10 UTC |
	| start   | -o=json --download-only        | download-only-227733 | jenkins | v1.35.0 | 27 Jan 25 12:10 UTC |                     |
	|         | -p download-only-227733        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:10:01
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:10:01.417334  474492 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:10:01.417619  474492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:10:01.417630  474492 out.go:358] Setting ErrFile to fd 2...
	I0127 12:10:01.417634  474492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:10:01.417838  474492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 12:10:01.418447  474492 out.go:352] Setting JSON to true
	I0127 12:10:01.419480  474492 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":31898,"bootTime":1737947903,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:10:01.419592  474492 start.go:139] virtualization: kvm guest
	I0127 12:10:01.421641  474492 out.go:97] [download-only-227733] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:10:01.421822  474492 notify.go:220] Checking for updates...
	I0127 12:10:01.423321  474492 out.go:169] MINIKUBE_LOCATION=20317
	I0127 12:10:01.424587  474492 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:10:01.426041  474492 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 12:10:01.427532  474492 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	I0127 12:10:01.428897  474492 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-227733 host does not exist
	  To start a cluster, run: "minikube start -p download-only-227733"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-227733
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I0127 12:10:05.852814  474275 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-243152 --alsologtostderr --binary-mirror http://127.0.0.1:34681 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-243152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-243152
--- PASS: TestBinaryMirror (0.62s)
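The binary.go line above shows kubectl being fetched with a "?checksum=file:...kubectl.sha256" URL, i.e. the download is verified against the published SHA-256 digest. A minimal sketch of that verification (URLs copied from the log; this is not minikube's download.go):

// Minimal sketch of the checksum verification implied by the
// "?checksum=file:...kubectl.sha256" URL above: fetch the binary and the
// published digest, compute SHA-256 locally, and compare.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	want, err := fetch(base + ".sha256") // file contains only the hex digest
	if err != nil {
		panic(err)
	}

	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(want)) {
		panic("checksum mismatch: refusing to use the downloaded kubectl")
	}
	fmt.Println("kubectl checksum verified")
}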

                                                
                                    
x
+
TestOffline (85.75s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-871491 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-871491 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m24.508005751s)
helpers_test.go:175: Cleaning up "offline-containerd-871491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-871491
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-871491: (1.237832991s)
--- PASS: TestOffline (85.75s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-728052
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-728052: exit status 85 (55.284355ms)

                                                
                                                
-- stdout --
	* Profile "addons-728052" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-728052"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-728052
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-728052: exit status 85 (53.764203ms)

                                                
                                                
-- stdout --
	* Profile "addons-728052" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-728052"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (212.72s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-728052 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-728052 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m32.719338747s)
--- PASS: TestAddons/Setup (212.72s)

                                                
                                    
x
+
TestAddons/serial/Volcano (40.88s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 29.438643ms
addons_test.go:807: volcano-scheduler stabilized in 29.492352ms
addons_test.go:823: volcano-controller stabilized in 29.556022ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-gh652" [1ded3c67-afa9-40fc-aa9c-2a30af71ab81] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005586303s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-wk4pg" [202323da-748e-4276-875f-c1cb71d59741] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003751819s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-9rhvg" [ed9c1c7e-01b1-414e-b7fa-c5c71e8a5da5] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004715678s
addons_test.go:842: (dbg) Run:  kubectl --context addons-728052 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-728052 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-728052 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [9cd3789e-104f-4a45-93d2-df378a103d23] Pending
helpers_test.go:344: "test-job-nginx-0" [9cd3789e-104f-4a45-93d2-df378a103d23] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [9cd3789e-104f-4a45-93d2-df378a103d23] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004500974s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-728052 addons disable volcano --alsologtostderr -v=1: (11.464992588s)
--- PASS: TestAddons/serial/Volcano (40.88s)
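The "waiting 6m0s for pods matching ..." / "healthy within ..." lines come from a poll loop that repeatedly lists pods by label until all of them report phase Running. A minimal sketch of such a wait using client-go (namespace and label copied from the log above; not the harness's own helper):

// Minimal sketch of a "wait until pods with this label are Running" poll loop.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls until every pod matching selector in ns is Running.
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if running == len(pods.Items) {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodsRunning(context.Background(), cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("all matching pods are Running")
}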

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-728052 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-728052 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-728052 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-728052 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [df98110c-f111-472d-9bb5-cbf39cf677d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [df98110c-f111-472d-9bb5-cbf39cf677d0] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004495445s
addons_test.go:633: (dbg) Run:  kubectl --context addons-728052 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-728052 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-728052 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.51s)

                                                
                                    
x
+
TestAddons/parallel/Registry (19.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 7.68839ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-594qc" [bcc410b0-1bf0-44fe-946f-e8082ad79523] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003649157s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hf99d" [cd0caedb-3be5-49af-a0d9-2fadf88aa273] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004566655s
addons_test.go:331: (dbg) Run:  kubectl --context addons-728052 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-728052 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-728052 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.759951282s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 ip
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.53s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (22.57s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-728052 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-728052 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-728052 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [927f32d9-d47b-4400-898c-be79b041681b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [927f32d9-d47b-4400-898c-be79b041681b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004317383s
I0127 12:15:13.363147  474275 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-728052 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.120
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-728052 addons disable ingress-dns --alsologtostderr -v=1: (1.426232089s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-728052 addons disable ingress --alsologtostderr -v=1: (7.824476847s)
--- PASS: TestAddons/parallel/Ingress (22.57s)
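The ingress check above curls 127.0.0.1 with "Host: nginx.example.com" from inside the node (via minikube ssh), so the nginx ingress controller routes the request by host name. The Go equivalent of that curl is setting Request.Host before sending; a minimal sketch (run it where the ingress is reachable, e.g. inside the node, and adjust the address otherwise):

// Minimal sketch of the ingress probe above: an HTTP request to the node's
// loopback address with the Host header set to the ingress rule's host.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Setting req.Host overrides the Host header that net/http sends.
	req.Host = "nginx.example.com"

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}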

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.92s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pbqnn" [47369a0f-51ed-4977-9eed-e8ea26254b7d] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004661332s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-728052 addons disable inspektor-gadget --alsologtostderr -v=1: (5.915561553s)
--- PASS: TestAddons/parallel/InspektorGadget (11.92s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.78s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 7.905587ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-pvwlz" [7c6043a3-74f9-4d8c-9bf2-80f69889bb4a] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00425929s
addons_test.go:402: (dbg) Run:  kubectl --context addons-728052 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.78s)
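
The final step above, kubectl top pods, only succeeds once metrics-server has scraped at least one round of metrics. A minimal sketch of the same check with a retry, assuming kubectl is on PATH and using the addons-728052 context from this run; the timeout and interval are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Retry `kubectl top pods -n kube-system` until the metrics API answers.
func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-728052",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for metrics-server")
}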

                                                
                                    
x
+
TestAddons/parallel/CSI (51.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0127 12:14:57.217804  474275 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 12:14:57.231757  474275 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 12:14:57.231787  474275 kapi.go:107] duration metric: took 14.013449ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 14.022752ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-728052 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-728052 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ac704886-d70c-48f7-a58b-2b157f61e10d] Pending
helpers_test.go:344: "task-pv-pod" [ac704886-d70c-48f7-a58b-2b157f61e10d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ac704886-d70c-48f7-a58b-2b157f61e10d] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.005675142s
addons_test.go:511: (dbg) Run:  kubectl --context addons-728052 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-728052 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-728052 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-728052 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-728052 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-728052 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-728052 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ef9bda4b-db08-44e3-ae79-399379fc9431] Pending
helpers_test.go:344: "task-pv-pod-restore" [ef9bda4b-db08-44e3-ae79-399379fc9431] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ef9bda4b-db08-44e3-ae79-399379fc9431] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00399064s
addons_test.go:553: (dbg) Run:  kubectl --context addons-728052 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-728052 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-728052 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-728052 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.837845408s)
--- PASS: TestAddons/parallel/CSI (51.54s)
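
The long runs of repeated `kubectl get pvc ... -o jsonpath={.status.phase}` calls above are a phase poll: the helper re-reads the claim until it reaches the expected phase. A standalone sketch of that loop for the hpvc claim from this run, assuming kubectl is on PATH; the 6-minute deadline matches the wait announced in the log, the 2-second interval is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll a PVC's phase until it reports Bound.
func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-728052",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && string(out) == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc")
}

The same loop is reused for hpvc-restore after the snapshot is restored; only the claim name changes.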

                                                
                                    
x
+
TestAddons/parallel/Headlamp (27.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-728052 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-kvrtb" [a7e4c846-416f-41b2-b426-b92f8125d187] Pending
helpers_test.go:344: "headlamp-69d78d796f-kvrtb" [a7e4c846-416f-41b2-b426-b92f8125d187] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-kvrtb" [a7e4c846-416f-41b2-b426-b92f8125d187] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.004782765s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-728052 addons disable headlamp --alsologtostderr -v=1: (6.039897252s)
--- PASS: TestAddons/parallel/Headlamp (27.97s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-kgfqk" [29c0e36d-74c4-4ba2-9ee2-f1506f8261a6] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004042006s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (11.27s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-728052 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-728052 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc test-pvc -o jsonpath={.status.phase} -n default
2025/01/27 12:14:56 [DEBUG] GET http://192.168.39.120:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-728052 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [85401985-85c9-46be-bcbb-22fa561ba15a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [85401985-85c9-46be-bcbb-22fa561ba15a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [85401985-85c9-46be-bcbb-22fa561ba15a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005108536s
addons_test.go:906: (dbg) Run:  kubectl --context addons-728052 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 ssh "cat /opt/local-path-provisioner/pvc-eb7f06d5-3375-4a7a-847c-331c1d07f8c6_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-728052 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-728052 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.27s)
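
The decisive step above is the `minikube ssh "cat /opt/local-path-provisioner/<volume>_default_test-pvc/file1"` call: it proves the data written by the pod landed on the node's local path. A sketch of the same verification, assuming the addons-728052 profile is still up and kubectl/minikube are on PATH; the path layout is the one visible in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Look up the volume backing test-pvc, then read the file it should contain
// from the node via minikube ssh.
func main() {
	vol, err := exec.Command("kubectl", "--context", "addons-728052",
		"get", "pvc", "test-pvc", "-n", "default",
		"-o", "jsonpath={.spec.volumeName}").Output()
	if err != nil {
		fmt.Println("could not read PVC:", err)
		return
	}
	path := fmt.Sprintf("/opt/local-path-provisioner/%s_default_test-pvc/file1",
		strings.TrimSpace(string(vol)))
	out, err := exec.Command("minikube", "-p", "addons-728052",
		"ssh", "cat "+path).CombinedOutput()
	fmt.Printf("file1 contents: %q (err=%v)\n", out, err)
}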

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.6s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7bscl" [01810171-c627-40fc-86a2-116717b524d9] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005922144s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-kvwnx" [30ad00e1-10cc-4e22-81ff-d1edfdf8f349] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004537854s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-728052 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-728052 addons disable yakd --alsologtostderr -v=1: (5.809826233s)
--- PASS: TestAddons/parallel/Yakd (10.82s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (91.28s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-728052
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-728052: (1m30.979361993s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-728052
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-728052
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-728052
--- PASS: TestAddons/StoppedEnableDisable (91.28s)

                                                
                                    
x
+
TestCertOptions (95.62s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-425056 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
E0127 13:13:39.254180  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-425056 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m34.320263656s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-425056 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-425056 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-425056 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-425056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-425056
--- PASS: TestCertOptions (95.62s)
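
The assertions behind this test follow directly from the commands in the log: the extra --apiserver-ips/--apiserver-names values must appear as SANs in apiserver.crt, and the kubeconfig must point at the custom --apiserver-port. A sketch of that verification, using the cert-options-425056 profile and the exact values passed in this run, assuming minikube and kubectl are on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Check the serving certificate's SANs and the kubeconfig port after a start
// with custom --apiserver-ips, --apiserver-names and --apiserver-port.
func main() {
	cert, _ := exec.Command("minikube", "-p", "cert-options-425056", "ssh",
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		fmt.Printf("cert contains %s: %v\n", want, strings.Contains(string(cert), want))
	}

	cfg, _ := exec.Command("kubectl", "--context", "cert-options-425056",
		"config", "view").Output()
	fmt.Printf("kubeconfig uses port 8555: %v\n", strings.Contains(string(cfg), ":8555"))
}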

                                                
                                    
x
+
TestCertExpiration (325s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-136116 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-136116 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m19.929315447s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-136116 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-136116 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (1m3.976692677s)
helpers_test.go:175: Cleaning up "cert-expiration-136116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-136116
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-136116: (1.088788659s)
--- PASS: TestCertExpiration (325.00s)

                                                
                                    
x
+
TestForceSystemdFlag (54.27s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-925568 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-925568 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (52.985971592s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-925568 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-925568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-925568
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-925568: (1.063439676s)
--- PASS: TestForceSystemdFlag (54.27s)
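
The `cat /etc/containerd/config.toml` step above is where --force-systemd becomes observable: with the flag set, the runc runtime options in that file should select the systemd cgroup driver. A sketch of the check, assuming the force-systemd-flag-925568 profile from this run and minikube on PATH; SystemdCgroup is the standard containerd/runc config key:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Read the node's containerd config and report whether runc is configured
// to use the systemd cgroup driver.
func main() {
	out, err := exec.Command("minikube", "-p", "force-systemd-flag-925568", "ssh",
		"cat /etc/containerd/config.toml").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	fmt.Printf("SystemdCgroup = true present: %v\n",
		strings.Contains(string(out), "SystemdCgroup = true"))
}

TestForceSystemdEnv below performs the same config.toml check, driven by the MINIKUBE_FORCE_SYSTEMD environment variable instead of the flag.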

                                                
                                    
x
+
TestForceSystemdEnv (73.31s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-716358 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-716358 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m12.168284364s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-716358 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-716358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-716358
--- PASS: TestForceSystemdEnv (73.31s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0127 13:11:39.358362  474275 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:11:39.358521  474275 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 13:11:39.388962  474275 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 13:11:39.389324  474275 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 13:11:39.389378  474275 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4040681930/001/docker-machine-driver-kvm2
I0127 13:11:39.615543  474275 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4040681930/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc00088d780 gz:0xc00088d788 tar:0xc00088d710 tar.bz2:0xc00088d730 tar.gz:0xc00088d750 tar.xz:0xc00088d760 tar.zst:0xc00088d770 tbz2:0xc00088d730 tgz:0xc00088d750 txz:0xc00088d760 tzst:0xc00088d770 xz:0xc00088d790 zip:0xc00088d7a0 zst:0xc00088d798] Getters:map[file:0xc001aa59b0 http:0xc00046fe00 https:0xc00046fe50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 13:11:39.615604  474275 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4040681930/001/docker-machine-driver-kvm2
I0127 13:11:41.628954  474275 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:11:41.629049  474275 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 13:11:41.667671  474275 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 13:11:41.667719  474275 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 13:11:41.667816  474275 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 13:11:41.667854  474275 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4040681930/002/docker-machine-driver-kvm2
I0127 13:11:41.728207  474275 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4040681930/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc00088d780 gz:0xc00088d788 tar:0xc00088d710 tar.bz2:0xc00088d730 tar.gz:0xc00088d750 tar.xz:0xc00088d760 tar.zst:0xc00088d770 tbz2:0xc00088d730 tgz:0xc00088d750 txz:0xc00088d760 tzst:0xc00088d770 xz:0xc00088d790 zip:0xc00088d7a0 zst:0xc00088d798] Getters:map[file:0xc0008b5810 http:0xc000c16dc0 https:0xc000c16e10] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 13:11:41.728257  474275 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4040681930/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.00s)
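
The driver.go messages above show the download fallback: the arch-suffixed artifact (docker-machine-driver-kvm2-amd64) is tried first, and when its checksum file returns 404 the un-suffixed common name is used instead. A sketch that only probes the two URLs from this run in that order, assuming outbound HTTPS access; it does not download anything:

package main

import (
	"fmt"
	"net/http"
)

// Probe the arch-specific artifact first, then fall back to the common name
// when its checksum file is missing, mirroring the fallback in the log.
func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"
	for _, name := range []string{
		"docker-machine-driver-kvm2-amd64", // arch-specific, tried first
		"docker-machine-driver-kvm2",       // common fallback
	} {
		resp, err := http.Head(base + name + ".sha256")
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s.sha256 -> %s\n", name, resp.Status)
		if resp.StatusCode == http.StatusOK {
			fmt.Println("would download", base+name)
			return
		}
	}
}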

                                                
                                    
x
+
TestErrorSpam/setup (44.47s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-259907 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-259907 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-259907 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-259907 --driver=kvm2  --container-runtime=containerd: (44.469343147s)
--- PASS: TestErrorSpam/setup (44.47s)

                                                
                                    
x
+
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
x
+
TestErrorSpam/status (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
x
+
TestErrorSpam/pause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
x
+
TestErrorSpam/stop (5.13s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 stop: (1.533072765s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 stop: (1.538675811s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-259907 --log_dir /tmp/nospam-259907 stop: (2.056552818s)
--- PASS: TestErrorSpam/stop (5.13s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/test/nested/copy/474275/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (81.57s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-293873 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0127 12:18:39.260282  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:39.266706  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:39.278113  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:39.299574  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:39.341026  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:39.422544  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:39.584072  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:39.905800  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:40.547238  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:41.828928  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:44.391346  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:49.513211  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:59.755029  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:19:20.236512  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-293873 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m21.567009476s)
--- PASS: TestFunctional/serial/StartWithProxy (81.57s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (41.18s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0127 12:19:37.106374  474275 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-293873 --alsologtostderr -v=8
E0127 12:20:01.199387  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-293873 --alsologtostderr -v=8: (41.178005631s)
functional_test.go:663: soft start took 41.178810997s for "functional-293873" cluster.
I0127 12:20:18.284774  474275 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (41.18s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-293873 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.97s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-293873 cache add registry.k8s.io/pause:3.3: (1.036092027s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.97s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.89s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-293873 /tmp/TestFunctionalserialCacheCmdcacheadd_local3816871029/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 cache add minikube-local-cache-test:functional-293873
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-293873 cache add minikube-local-cache-test:functional-293873: (1.545966153s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 cache delete minikube-local-cache-test:functional-293873
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-293873
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.89s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-293873 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (230.697821ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
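
The cache_reload sequence above is a useful recipe on its own: remove a cached image on the node, confirm crictl no longer finds it, run `minikube cache reload`, then confirm it is back. A sketch of that round trip using the functional-293873 profile from this run, assuming minikube is on PATH:

package main

import (
	"fmt"
	"os/exec"
)

// Run a command, echo its output, and return its error for exit-status checks.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	p := "functional-293873"
	run("minikube", "-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if err := run("minikube", "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image gone from the node, as expected")
	}
	run("minikube", "-p", p, "cache", "reload")
	if err := run("minikube", "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("image restored by cache reload")
	}
}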

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 kubectl -- --context functional-293873 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-293873 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (50.96s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-293873 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-293873 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.958330173s)
functional_test.go:761: restart took 50.958458283s for "functional-293873" cluster.
I0127 12:21:16.515833  474275 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (50.96s)
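
The restart above passes --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision, which should surface on the kube-apiserver command line. A sketch of how that could be confirmed, assuming kubectl is on PATH and the standard kubeadm component=kube-apiserver label on the static pod; the jsonpath and label are assumptions, not taken from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Read the kube-apiserver pod's command line and check that the admission
// plugin passed via --extra-config is present.
func main() {
	out, err := exec.Command("kubectl", "--context", "functional-293873",
		"-n", "kube-system", "get", "pod", "-l", "component=kube-apiserver",
		"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println("NamespaceAutoProvision enabled:",
		strings.Contains(string(out), "NamespaceAutoProvision"))
}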

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-293873 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
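
The phase/status lines above come from decoding `kubectl get po -l tier=control-plane -n kube-system -o json` and reading each pod's phase and Ready condition. A self-contained sketch of that decode, assuming kubectl is on PATH and using the functional-293873 context from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// List the control-plane pods and report each one's phase and Ready condition.
func main() {
	raw, err := exec.Command("kubectl", "--context", "functional-293873",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var list struct {
		Items []struct {
			Metadata struct{ Name string }
			Status   struct {
				Phase      string
				Conditions []struct{ Type, Status string }
			}
		}
	}
	if err := json.Unmarshal(raw, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, pod := range list.Items {
		ready := "Unknown"
		for _, c := range pod.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s: phase=%s ready=%s\n", pod.Metadata.Name, pod.Status.Phase, ready)
	}
}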

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-293873 logs: (1.419687585s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 logs --file /tmp/TestFunctionalserialLogsFileCmd4143235206/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-293873 logs --file /tmp/TestFunctionalserialLogsFileCmd4143235206/001/logs.txt: (1.435660299s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.58s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-293873 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-293873
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-293873: exit status 115 (293.236815ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.143:30347 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-293873 delete -f testdata/invalidsvc.yaml
E0127 12:21:23.121685  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2327: (dbg) Done: kubectl --context functional-293873 delete -f testdata/invalidsvc.yaml: (1.072043299s)
--- PASS: TestFunctional/serial/InvalidService (4.58s)
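
The exit status 115 and the SVC_UNREACHABLE message above are the signal this test checks for: `minikube service` fails when the service has no running pods behind it. A sketch of reading that exit code programmatically, assuming minikube is on PATH and the invalid-svc manifest from testdata is applied to the functional-293873 profile:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Run `minikube service` against a service with no running pods and report
// the non-zero exit code (115 in this run).
func main() {
	cmd := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-293873")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}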

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-293873 config get cpus: exit status 14 (62.613551ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-293873 config get cpus: exit status 14 (55.107167ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
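
The ConfigCmd run above cycles `config unset` / `get` / `set` on the cpus key and relies on `config get` exiting with status 14 when the key is absent. A sketch of the same cycle with the exit code surfaced, assuming minikube is on PATH and using the functional-293873 profile from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Read the cpus config value, treating exit status 14 as "key not set".
func get(profile string) {
	out, err := exec.Command("minikube", "-p", profile, "config", "get", "cpus").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("cpus unset, exit code:", exitErr.ExitCode())
		return
	}
	fmt.Printf("cpus = %s", out)
}

func main() {
	p := "functional-293873"
	get(p) // expected: not set
	exec.Command("minikube", "-p", p, "config", "set", "cpus", "2").Run()
	get(p) // expected: 2
	exec.Command("minikube", "-p", p, "config", "unset", "cpus").Run()
	get(p) // expected: not set again
}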

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (30.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-293873 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-293873 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 482904: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.43s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-293873 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-293873 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (148.947246ms)

                                                
                                                
-- stdout --
	* [functional-293873] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:21:36.382789  482434 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:21:36.383044  482434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:21:36.383055  482434 out.go:358] Setting ErrFile to fd 2...
	I0127 12:21:36.383060  482434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:21:36.383347  482434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 12:21:36.383981  482434 out.go:352] Setting JSON to false
	I0127 12:21:36.385052  482434 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":32593,"bootTime":1737947903,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:21:36.385182  482434 start.go:139] virtualization: kvm guest
	I0127 12:21:36.387341  482434 out.go:177] * [functional-293873] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:21:36.388944  482434 notify.go:220] Checking for updates...
	I0127 12:21:36.389010  482434 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 12:21:36.390553  482434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:21:36.391902  482434 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 12:21:36.393182  482434 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	I0127 12:21:36.394571  482434 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:21:36.395901  482434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:21:36.397915  482434 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:21:36.398371  482434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:21:36.398446  482434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:21:36.415075  482434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I0127 12:21:36.415567  482434 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:21:36.416168  482434 main.go:141] libmachine: Using API Version  1
	I0127 12:21:36.416198  482434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:21:36.416592  482434 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:21:36.416832  482434 main.go:141] libmachine: (functional-293873) Calling .DriverName
	I0127 12:21:36.417175  482434 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:21:36.417539  482434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:21:36.417604  482434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:21:36.433876  482434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37619
	I0127 12:21:36.434468  482434 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:21:36.435113  482434 main.go:141] libmachine: Using API Version  1
	I0127 12:21:36.435134  482434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:21:36.435496  482434 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:21:36.435734  482434 main.go:141] libmachine: (functional-293873) Calling .DriverName
	I0127 12:21:36.473505  482434 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:21:36.474781  482434 start.go:297] selected driver: kvm2
	I0127 12:21:36.474802  482434 start.go:901] validating driver "kvm2" against &{Name:functional-293873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-293873 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:21:36.474927  482434 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:21:36.477013  482434 out.go:201] 
	W0127 12:21:36.478387  482434 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 12:21:36.479757  482434 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-293873 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.32s)
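
The dry run above deliberately requests 250MB and is rejected because it is below the 1800MB floor quoted in the error. A hypothetical sketch of that kind of validation, using only the numbers visible in the log (minikube's real check lives elsewhere and may differ):

// Hypothetical memory-floor check; the 1800MB minimum is taken from the
// RSRC_INSUFFICIENT_REQ_MEMORY message above, not from minikube source.
package main

import "fmt"

const minUsableMemoryMB = 1800 // from "usable minimum of 1800MB" in the log

func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}
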

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-293873 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-293873 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (238.087671ms)

                                                
                                                
-- stdout --
	* [functional-293873] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:21:36.201985  482318 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:21:36.202227  482318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:21:36.202243  482318 out.go:358] Setting ErrFile to fd 2...
	I0127 12:21:36.202249  482318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:21:36.202700  482318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 12:21:36.203454  482318 out.go:352] Setting JSON to false
	I0127 12:21:36.204906  482318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":32593,"bootTime":1737947903,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:21:36.205042  482318 start.go:139] virtualization: kvm guest
	I0127 12:21:36.218037  482318 out.go:177] * [functional-293873] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 12:21:36.219601  482318 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 12:21:36.219741  482318 notify.go:220] Checking for updates...
	I0127 12:21:36.222853  482318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:21:36.224454  482318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 12:21:36.225956  482318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	I0127 12:21:36.227838  482318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:21:36.231822  482318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:21:36.233918  482318 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:21:36.234692  482318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:21:36.234777  482318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:21:36.255595  482318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I0127 12:21:36.257927  482318 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:21:36.258746  482318 main.go:141] libmachine: Using API Version  1
	I0127 12:21:36.258796  482318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:21:36.259383  482318 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:21:36.259857  482318 main.go:141] libmachine: (functional-293873) Calling .DriverName
	I0127 12:21:36.260121  482318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:21:36.260406  482318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:21:36.260431  482318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:21:36.281639  482318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I0127 12:21:36.282294  482318 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:21:36.283365  482318 main.go:141] libmachine: Using API Version  1
	I0127 12:21:36.283405  482318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:21:36.283829  482318 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:21:36.284089  482318 main.go:141] libmachine: (functional-293873) Calling .DriverName
	I0127 12:21:36.323832  482318 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0127 12:21:36.325306  482318 start.go:297] selected driver: kvm2
	I0127 12:21:36.325331  482318 start.go:901] validating driver "kvm2" against &{Name:functional-293873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-293873 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:21:36.325499  482318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:21:36.328076  482318 out.go:201] 
	W0127 12:21:36.329303  482318 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 12:21:36.330593  482318 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)
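
The French output above comes from running the same dry-run start under a non-English locale. A sketch of reproducing it by hand; the exact environment variable the test sets is an assumption (LC_ALL is used here), while the binary path and flags are copied from the log:

// Sketch: run minikube under a French locale to get the localized
// RSRC_INSUFFICIENT_REQ_MEMORY message seen above. LC_ALL is an assumption.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-293873",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=containerd")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // expected to include the French memory error
	if err != nil {
		fmt.Println("exit:", err) // exit status 23 in the run above
	}
}
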

                                                
                                    
TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-293873 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-293873 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-zjwwm" [8876ae9a-f8a2-4389-a175-47b0f56a0e8a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-zjwwm" [8876ae9a-f8a2-4389-a175-47b0f56a0e8a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003964698s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.143:32519
functional_test.go:1675: http://192.168.39.143:32519: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-zjwwm

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.143:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.143:32519
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.55s)
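
The connectivity check above resolves the NodePort URL with "minikube service ... --url" and then fetches it over HTTP. A minimal sketch of that step, assuming the binary path, profile, and service name shown in the log:

// Sketch: resolve the hello-node-connect URL and GET it, checking for the
// echoserver "Hostname:" line seen in the response above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-293873",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("could not resolve service URL:", err)
		return
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println("success:", strings.Contains(string(body), "Hostname:"))
}
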

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (42.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2bd5adb4-4fbb-4b99-931f-52eff6f6a3c4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.169516693s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-293873 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-293873 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-293873 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-293873 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ec7c0570-32f8-45f5-bcb7-5378ecce6a8b] Pending
helpers_test.go:344: "sp-pod" [ec7c0570-32f8-45f5-bcb7-5378ecce6a8b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ec7c0570-32f8-45f5-bcb7-5378ecce6a8b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003817705s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-293873 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-293873 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-293873 delete -f testdata/storage-provisioner/pod.yaml: (1.358015437s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-293873 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [34ab41b7-664e-4ce8-8e73-a6bb6185f90a] Pending
helpers_test.go:344: "sp-pod" [34ab41b7-664e-4ce8-8e73-a6bb6185f90a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [34ab41b7-664e-4ce8-8e73-a6bb6185f90a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.004468686s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-293873 exec sp-pod -- ls /tmp/mount
2025/01/27 12:22:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.36s)
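
The PVC test above repeatedly waits for sp-pod to reach Running before exercising the mounted volume. A sketch of that wait step done directly with kubectl instead of the test helpers; context and pod name are taken from the log, and the timeout is an arbitrary choice for the example:

// Sketch: poll the pod phase until it is Running or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-293873",
			"get", "pod", "sp-pod", "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("sp-pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for sp-pod")
}
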

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh -n functional-293873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 cp functional-293873:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2840855202/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh -n functional-293873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh -n functional-293873 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)

                                                
                                    
TestFunctional/parallel/MySQL (28.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-293873 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-mhb7c" [50bdd19e-931c-45c7-8a26-04467da23b17] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-mhb7c" [50bdd19e-931c-45c7-8a26-04467da23b17] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.004599046s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-293873 exec mysql-58ccfd96bb-mhb7c -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-293873 exec mysql-58ccfd96bb-mhb7c -- mysql -ppassword -e "show databases;": exit status 1 (185.139883ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 12:21:56.858715  474275 retry.go:31] will retry after 963.392147ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-293873 exec mysql-58ccfd96bb-mhb7c -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-293873 exec mysql-58ccfd96bb-mhb7c -- mysql -ppassword -e "show databases;": exit status 1 (218.030595ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 12:21:58.040592  474275 retry.go:31] will retry after 1.282110612s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-293873 exec mysql-58ccfd96bb-mhb7c -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-293873 exec mysql-58ccfd96bb-mhb7c -- mysql -ppassword -e "show databases;": exit status 1 (170.982494ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 12:21:59.494800  474275 retry.go:31] will retry after 1.871994204s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-293873 exec mysql-58ccfd96bb-mhb7c -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-293873 exec mysql-58ccfd96bb-mhb7c -- mysql -ppassword -e "show databases;": exit status 1 (118.477538ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 12:22:01.486492  474275 retry.go:31] will retry after 4.011267392s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-293873 exec mysql-58ccfd96bb-mhb7c -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.16s)
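
The MySQL checks above fail with ERROR 2002/1045 while mysqld is still initializing, and retry.go backs off between attempts until "show databases;" succeeds. A simplified sketch of that pattern with plain exponential backoff (the real helper uses a jittered schedule); context and pod name are copied from the log:

// Sketch: retry the mysql probe with growing delays until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	backoff := time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-293873", "exec",
			"mysql-58ccfd96bb-mhb7c", "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// Early attempts fail while mysqld is still starting (ERROR 2002/1045 above).
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("mysql never became ready")
}
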

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/474275/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "sudo cat /etc/test/nested/copy/474275/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/474275.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "sudo cat /etc/ssl/certs/474275.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/474275.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "sudo cat /usr/share/ca-certificates/474275.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/4742752.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "sudo cat /etc/ssl/certs/4742752.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/4742752.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "sudo cat /usr/share/ca-certificates/4742752.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.52s)
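
CertSync reads the same synced certificate from several in-VM locations. A sketch of verifying that two of those copies are identical, using only the paths shown in the log (comparing against the host-side original is left out because its location isn't shown here):

// Sketch: cat the two in-VM copies of the synced cert over minikube ssh and
// compare them byte-for-byte.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func sshCat(path string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64", "-p", "functional-293873",
		"ssh", "sudo cat "+path).Output()
}

func main() {
	a, err1 := sshCat("/etc/ssl/certs/474275.pem")
	b, err2 := sshCat("/usr/share/ca-certificates/474275.pem")
	if err1 != nil || err2 != nil {
		fmt.Println("ssh failed:", err1, err2)
		return
	}
	fmt.Println("copies identical:", bytes.Equal(a, b))
}
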

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-293873 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-293873 ssh "sudo systemctl is-active docker": exit status 1 (252.930263ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-293873 ssh "sudo systemctl is-active crio": exit status 1 (254.191306ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
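
In the runs above, "systemctl is-active" prints "inactive" and exits non-zero for the disabled runtimes, which minikube ssh surfaces as exit status 1, so both stdout and the error have to be inspected. A sketch of that interpretation, assuming the same binary path and profile:

// Sketch: a runtime counts as active only when stdout says "active";
// the non-zero exit on "inactive" is expected and ignored.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runtimeActive(unit string) bool {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-293873",
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out)) == "active"
}

func main() {
	for _, unit := range []string{"docker", "crio", "containerd"} {
		fmt.Printf("%s active: %v\n", unit, runtimeActive(unit))
	}
}
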

                                                
                                    
TestFunctional/parallel/License (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-293873 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-293873 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-dd4q8" [2de74bb8-4c31-4cb7-8207-0099afaef233] Pending
helpers_test.go:344: "hello-node-fcfd88b6f-dd4q8" [2de74bb8-4c31-4cb7-8207-0099afaef233] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-dd4q8" [2de74bb8-4c31-4cb7-8207-0099afaef233] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004405487s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "318.338364ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.777542ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "281.658309ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "49.445355ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
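
The JSON-output check above only times the command; consuming its output would look roughly like the sketch below. The schema is an assumption (a top-level "valid"/"invalid" split with a Name per profile is what current minikube emits), so the example decodes loosely rather than into a fixed struct:

// Sketch: parse "minikube profile list -o json" and print the valid profiles.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var result map[string][]map[string]interface{}
	if err := json.Unmarshal(out, &result); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, p := range result["valid"] {
		fmt.Println("valid profile:", p["Name"])
	}
}
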

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-293873 /tmp/TestFunctionalparallelMountCmdany-port4050458197/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737980486509557909" to /tmp/TestFunctionalparallelMountCmdany-port4050458197/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737980486509557909" to /tmp/TestFunctionalparallelMountCmdany-port4050458197/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737980486509557909" to /tmp/TestFunctionalparallelMountCmdany-port4050458197/001/test-1737980486509557909
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-293873 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (228.995181ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 12:21:26.738839  474275 retry.go:31] will retry after 419.501682ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 12:21 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 12:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 12:21 test-1737980486509557909
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh cat /mount-9p/test-1737980486509557909
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-293873 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [937fda52-69c7-4de1-8190-82fe4d4ab018] Pending
helpers_test.go:344: "busybox-mount" [937fda52-69c7-4de1-8190-82fe4d4ab018] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [937fda52-69c7-4de1-8190-82fe4d4ab018] Running
helpers_test.go:344: "busybox-mount" [937fda52-69c7-4de1-8190-82fe4d4ab018] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [937fda52-69c7-4de1-8190-82fe4d4ab018] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003964538s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-293873 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-293873 /tmp/TestFunctionalparallelMountCmdany-port4050458197/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.51s)
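
After launching "minikube mount" in the background, the test polls findmnt until the 9p mount is visible (the first attempt above fails because the mount is still coming up). A sketch of that poll, using the same profile and mount point as the log:

// Sketch: retry "findmnt -T /mount-9p" over minikube ssh until the 9p
// filesystem shows up or the attempts run out.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-293873",
			"ssh", "findmnt -T /mount-9p").Output()
		if err == nil && strings.Contains(string(out), "9p") {
			fmt.Println("9p mount is up:")
			fmt.Print(string(out))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared at /mount-9p")
}
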

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-293873 /tmp/TestFunctionalparallelMountCmdspecific-port181010303/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-293873 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (219.580315ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 12:21:34.234587  474275 retry.go:31] will retry after 673.750551ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-293873 /tmp/TestFunctionalparallelMountCmdspecific-port181010303/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-293873 ssh "sudo umount -f /mount-9p": exit status 1 (256.871757ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-293873 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-293873 /tmp/TestFunctionalparallelMountCmdspecific-port181010303/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.07s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 service list -o json
functional_test.go:1494: Took "338.107711ms" to run "out/minikube-linux-amd64 -p functional-293873 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.143:31146
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-293873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3690245813/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-293873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3690245813/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-293873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3690245813/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-293873 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-293873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3690245813/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-293873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3690245813/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-293873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3690245813/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.96s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.143:31146
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-293873 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-293873
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-293873
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-293873 image ls --format short --alsologtostderr:
I0127 12:21:49.805288  483462 out.go:345] Setting OutFile to fd 1 ...
I0127 12:21:49.805447  483462 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:21:49.805465  483462 out.go:358] Setting ErrFile to fd 2...
I0127 12:21:49.805479  483462 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:21:49.805723  483462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
I0127 12:21:49.806329  483462 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:21:49.806432  483462 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:21:49.806793  483462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:21:49.806838  483462 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:21:49.822845  483462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33655
I0127 12:21:49.823430  483462 main.go:141] libmachine: () Calling .GetVersion
I0127 12:21:49.824033  483462 main.go:141] libmachine: Using API Version  1
I0127 12:21:49.824057  483462 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:21:49.824416  483462 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:21:49.824599  483462 main.go:141] libmachine: (functional-293873) Calling .GetState
I0127 12:21:49.826374  483462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:21:49.826432  483462 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:21:49.842113  483462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
I0127 12:21:49.842606  483462 main.go:141] libmachine: () Calling .GetVersion
I0127 12:21:49.843122  483462 main.go:141] libmachine: Using API Version  1
I0127 12:21:49.843149  483462 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:21:49.843510  483462 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:21:49.843723  483462 main.go:141] libmachine: (functional-293873) Calling .DriverName
I0127 12:21:49.843907  483462 ssh_runner.go:195] Run: systemctl --version
I0127 12:21:49.843933  483462 main.go:141] libmachine: (functional-293873) Calling .GetSSHHostname
I0127 12:21:49.846456  483462 main.go:141] libmachine: (functional-293873) DBG | domain functional-293873 has defined MAC address 52:54:00:59:3e:d4 in network mk-functional-293873
I0127 12:21:49.846922  483462 main.go:141] libmachine: (functional-293873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:3e:d4", ip: ""} in network mk-functional-293873: {Iface:virbr1 ExpiryTime:2025-01-27 13:18:31 +0000 UTC Type:0 Mac:52:54:00:59:3e:d4 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-293873 Clientid:01:52:54:00:59:3e:d4}
I0127 12:21:49.846956  483462 main.go:141] libmachine: (functional-293873) DBG | domain functional-293873 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:3e:d4 in network mk-functional-293873
I0127 12:21:49.847070  483462 main.go:141] libmachine: (functional-293873) Calling .GetSSHPort
I0127 12:21:49.847277  483462 main.go:141] libmachine: (functional-293873) Calling .GetSSHKeyPath
I0127 12:21:49.847448  483462 main.go:141] libmachine: (functional-293873) Calling .GetSSHUsername
I0127 12:21:49.847621  483462 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/functional-293873/id_rsa Username:docker}
I0127 12:21:49.937863  483462 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 12:21:49.988101  483462 main.go:141] libmachine: Making call to close driver server
I0127 12:21:49.988117  483462 main.go:141] libmachine: (functional-293873) Calling .Close
I0127 12:21:49.988454  483462 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:21:49.988500  483462 main.go:141] libmachine: (functional-293873) DBG | Closing plugin on server side
I0127 12:21:49.988502  483462 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:21:49.988544  483462 main.go:141] libmachine: Making call to close driver server
I0127 12:21:49.988557  483462 main.go:141] libmachine: (functional-293873) Calling .Close
I0127 12:21:49.988799  483462 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:21:49.988817  483462 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:21:49.988832  483462 main.go:141] libmachine: (functional-293873) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
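
A minimal Go sketch of what the short listing gives a caller: one image reference per line, easy to split and scan. The expected image checked here (registry.k8s.io/pause:3.10) is a single entry taken from the output above as an example; the real test compares against a fuller expected set.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `image ls --format short` prints one image reference per line, which makes
	// presence checks trivial.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-293873",
		"image", "ls", "--format", "short").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	images := strings.Fields(string(out))
	want := "registry.k8s.io/pause:3.10"
	found := false
	for _, img := range images {
		if img == want {
			found = true
			break
		}
	}
	fmt.Printf("listed %d images; %s present: %v\n", len(images), want, found)
}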

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-293873 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:50415e | 38.6MB |
| docker.io/library/minikube-local-cache-test | functional-293873  | sha256:fdf392 | 992B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:95c0bd | 28.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kicbase/echo-server               | functional-293873  | sha256:9056ab | 2.37MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| localhost/my-image                          | functional-293873  | sha256:3bdf90 | 775kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e29f9c | 30.9MB |
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:2b0d65 | 20.7MB |
| docker.io/library/nginx                     | latest             | sha256:9bea9f | 72.1MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:019ee1 | 26.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-293873 image ls --format table --alsologtostderr:
I0127 12:21:54.751865  483626 out.go:345] Setting OutFile to fd 1 ...
I0127 12:21:54.751986  483626 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:21:54.751996  483626 out.go:358] Setting ErrFile to fd 2...
I0127 12:21:54.752001  483626 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:21:54.752191  483626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
I0127 12:21:54.752850  483626 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:21:54.752971  483626 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:21:54.753374  483626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:21:54.753444  483626 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:21:54.769234  483626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
I0127 12:21:54.769756  483626 main.go:141] libmachine: () Calling .GetVersion
I0127 12:21:54.770454  483626 main.go:141] libmachine: Using API Version  1
I0127 12:21:54.770477  483626 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:21:54.770863  483626 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:21:54.771088  483626 main.go:141] libmachine: (functional-293873) Calling .GetState
I0127 12:21:54.773179  483626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:21:54.773233  483626 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:21:54.788543  483626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45147
I0127 12:21:54.789039  483626 main.go:141] libmachine: () Calling .GetVersion
I0127 12:21:54.789627  483626 main.go:141] libmachine: Using API Version  1
I0127 12:21:54.789660  483626 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:21:54.790024  483626 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:21:54.790216  483626 main.go:141] libmachine: (functional-293873) Calling .DriverName
I0127 12:21:54.790417  483626 ssh_runner.go:195] Run: systemctl --version
I0127 12:21:54.790450  483626 main.go:141] libmachine: (functional-293873) Calling .GetSSHHostname
I0127 12:21:54.793029  483626 main.go:141] libmachine: (functional-293873) DBG | domain functional-293873 has defined MAC address 52:54:00:59:3e:d4 in network mk-functional-293873
I0127 12:21:54.793410  483626 main.go:141] libmachine: (functional-293873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:3e:d4", ip: ""} in network mk-functional-293873: {Iface:virbr1 ExpiryTime:2025-01-27 13:18:31 +0000 UTC Type:0 Mac:52:54:00:59:3e:d4 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-293873 Clientid:01:52:54:00:59:3e:d4}
I0127 12:21:54.793437  483626 main.go:141] libmachine: (functional-293873) DBG | domain functional-293873 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:3e:d4 in network mk-functional-293873
I0127 12:21:54.793552  483626 main.go:141] libmachine: (functional-293873) Calling .GetSSHPort
I0127 12:21:54.793717  483626 main.go:141] libmachine: (functional-293873) Calling .GetSSHKeyPath
I0127 12:21:54.793864  483626 main.go:141] libmachine: (functional-293873) Calling .GetSSHUsername
I0127 12:21:54.794045  483626 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/functional-293873/id_rsa Username:docker}
I0127 12:21:54.878032  483626 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 12:21:54.913821  483626 main.go:141] libmachine: Making call to close driver server
I0127 12:21:54.913838  483626 main.go:141] libmachine: (functional-293873) Calling .Close
I0127 12:21:54.914152  483626 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:21:54.914187  483626 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:21:54.914202  483626 main.go:141] libmachine: Making call to close driver server
I0127 12:21:54.914200  483626 main.go:141] libmachine: (functional-293873) DBG | Closing plugin on server side
I0127 12:21:54.914218  483626 main.go:141] libmachine: (functional-293873) Calling .Close
I0127 12:21:54.914537  483626 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:21:54.914554  483626 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:21:54.914555  483626 main.go:141] libmachine: (functional-293873) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-293873 image ls --format json --alsologtostderr:
[{"id":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"26258470"},{"id":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"30908485"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:fdf3922689159cd8c1c0bca4a6d95b37737b307a17616c21edc0ecafab6c8ab6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-293873"],"size":"992"},{"id":"sha256:9bea9f2796e23
6cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"72080558"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},{"id":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"28671624"},{"id":"sha256:3bdf902f83d3f892f7e4dc7cd772075e26a6d7f643c8610ce53ba1be0dd91a57","repoDigests":[],"repoTags":["localhost/my-image:functional-293873"],"size":"774886"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["r
egistry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"20657536"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6
dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-293873"],"size":"2372971"},{"id":"sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"38601118"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676c
ae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-293873 image ls --format json --alsologtostderr:
I0127 12:21:54.520824  483586 out.go:345] Setting OutFile to fd 1 ...
I0127 12:21:54.520989  483586 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:21:54.520999  483586 out.go:358] Setting ErrFile to fd 2...
I0127 12:21:54.521004  483586 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:21:54.521203  483586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
I0127 12:21:54.521989  483586 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:21:54.522120  483586 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:21:54.522557  483586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:21:54.522656  483586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:21:54.539704  483586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38367
I0127 12:21:54.540196  483586 main.go:141] libmachine: () Calling .GetVersion
I0127 12:21:54.540875  483586 main.go:141] libmachine: Using API Version  1
I0127 12:21:54.540904  483586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:21:54.541416  483586 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:21:54.541667  483586 main.go:141] libmachine: (functional-293873) Calling .GetState
I0127 12:21:54.543686  483586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:21:54.543731  483586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:21:54.559737  483586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
I0127 12:21:54.560254  483586 main.go:141] libmachine: () Calling .GetVersion
I0127 12:21:54.560877  483586 main.go:141] libmachine: Using API Version  1
I0127 12:21:54.560909  483586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:21:54.561311  483586 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:21:54.561504  483586 main.go:141] libmachine: (functional-293873) Calling .DriverName
I0127 12:21:54.561720  483586 ssh_runner.go:195] Run: systemctl --version
I0127 12:21:54.561744  483586 main.go:141] libmachine: (functional-293873) Calling .GetSSHHostname
I0127 12:21:54.565121  483586 main.go:141] libmachine: (functional-293873) DBG | domain functional-293873 has defined MAC address 52:54:00:59:3e:d4 in network mk-functional-293873
I0127 12:21:54.565598  483586 main.go:141] libmachine: (functional-293873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:3e:d4", ip: ""} in network mk-functional-293873: {Iface:virbr1 ExpiryTime:2025-01-27 13:18:31 +0000 UTC Type:0 Mac:52:54:00:59:3e:d4 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-293873 Clientid:01:52:54:00:59:3e:d4}
I0127 12:21:54.565636  483586 main.go:141] libmachine: (functional-293873) DBG | domain functional-293873 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:3e:d4 in network mk-functional-293873
I0127 12:21:54.565833  483586 main.go:141] libmachine: (functional-293873) Calling .GetSSHPort
I0127 12:21:54.566045  483586 main.go:141] libmachine: (functional-293873) Calling .GetSSHKeyPath
I0127 12:21:54.566218  483586 main.go:141] libmachine: (functional-293873) Calling .GetSSHUsername
I0127 12:21:54.566379  483586 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/functional-293873/id_rsa Username:docker}
I0127 12:21:54.650531  483586 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 12:21:54.693567  483586 main.go:141] libmachine: Making call to close driver server
I0127 12:21:54.693587  483586 main.go:141] libmachine: (functional-293873) Calling .Close
I0127 12:21:54.693905  483586 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:21:54.693936  483586 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:21:54.693938  483586 main.go:141] libmachine: (functional-293873) DBG | Closing plugin on server side
I0127 12:21:54.693947  483586 main.go:141] libmachine: Making call to close driver server
I0127 12:21:54.693955  483586 main.go:141] libmachine: (functional-293873) Calling .Close
I0127 12:21:54.694234  483586 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:21:54.694253  483586 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:21:54.694291  483586 main.go:141] libmachine: (functional-293873) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
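
The JSON listing is the machine-readable variant; a minimal Go sketch that decodes it follows. The struct fields mirror the keys visible in the stdout above (id, repoDigests, repoTags, size, with size reported as a string of bytes); the field set reflects only what this log shows, not necessarily the full schema.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageInfo mirrors the keys visible in the JSON output recorded above.
type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, reported as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-293873",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		name := img.ID
		if len(img.RepoTags) > 0 {
			name = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", name, img.Size)
	}
}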

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-293873 image ls --format yaml --alsologtostderr:
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-293873
size: "2372971"
- id: sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "72080558"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "38601118"
- id: sha256:fdf3922689159cd8c1c0bca4a6d95b37737b307a17616c21edc0ecafab6c8ab6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-293873
size: "992"
- id: sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "28671624"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "26258470"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "30908485"
- id: sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "20657536"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-293873 image ls --format yaml --alsologtostderr:
I0127 12:21:50.041774  483486 out.go:345] Setting OutFile to fd 1 ...
I0127 12:21:50.042160  483486 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:21:50.042210  483486 out.go:358] Setting ErrFile to fd 2...
I0127 12:21:50.042229  483486 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:21:50.042619  483486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
I0127 12:21:50.043325  483486 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:21:50.043432  483486 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:21:50.043785  483486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:21:50.043834  483486 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:21:50.058984  483486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36685
I0127 12:21:50.059503  483486 main.go:141] libmachine: () Calling .GetVersion
I0127 12:21:50.060196  483486 main.go:141] libmachine: Using API Version  1
I0127 12:21:50.060232  483486 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:21:50.060611  483486 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:21:50.060830  483486 main.go:141] libmachine: (functional-293873) Calling .GetState
I0127 12:21:50.062888  483486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:21:50.062933  483486 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:21:50.077879  483486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39275
I0127 12:21:50.078404  483486 main.go:141] libmachine: () Calling .GetVersion
I0127 12:21:50.078911  483486 main.go:141] libmachine: Using API Version  1
I0127 12:21:50.078937  483486 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:21:50.079278  483486 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:21:50.079460  483486 main.go:141] libmachine: (functional-293873) Calling .DriverName
I0127 12:21:50.079651  483486 ssh_runner.go:195] Run: systemctl --version
I0127 12:21:50.079683  483486 main.go:141] libmachine: (functional-293873) Calling .GetSSHHostname
I0127 12:21:50.082393  483486 main.go:141] libmachine: (functional-293873) DBG | domain functional-293873 has defined MAC address 52:54:00:59:3e:d4 in network mk-functional-293873
I0127 12:21:50.082791  483486 main.go:141] libmachine: (functional-293873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:3e:d4", ip: ""} in network mk-functional-293873: {Iface:virbr1 ExpiryTime:2025-01-27 13:18:31 +0000 UTC Type:0 Mac:52:54:00:59:3e:d4 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-293873 Clientid:01:52:54:00:59:3e:d4}
I0127 12:21:50.082828  483486 main.go:141] libmachine: (functional-293873) DBG | domain functional-293873 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:3e:d4 in network mk-functional-293873
I0127 12:21:50.083013  483486 main.go:141] libmachine: (functional-293873) Calling .GetSSHPort
I0127 12:21:50.083195  483486 main.go:141] libmachine: (functional-293873) Calling .GetSSHKeyPath
I0127 12:21:50.083387  483486 main.go:141] libmachine: (functional-293873) Calling .GetSSHUsername
I0127 12:21:50.083532  483486 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/functional-293873/id_rsa Username:docker}
I0127 12:21:50.167859  483486 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 12:21:50.263464  483486 main.go:141] libmachine: Making call to close driver server
I0127 12:21:50.263479  483486 main.go:141] libmachine: (functional-293873) Calling .Close
I0127 12:21:50.263814  483486 main.go:141] libmachine: (functional-293873) DBG | Closing plugin on server side
I0127 12:21:50.263814  483486 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:21:50.263855  483486 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:21:50.263870  483486 main.go:141] libmachine: Making call to close driver server
I0127 12:21:50.263882  483486 main.go:141] libmachine: (functional-293873) Calling .Close
I0127 12:21:50.264169  483486 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:21:50.264186  483486 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-293873 ssh pgrep buildkitd: exit status 1 (219.034733ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image build -t localhost/my-image:functional-293873 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-293873 image build -t localhost/my-image:functional-293873 testdata/build --alsologtostderr: (3.743342123s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-293873 image build -t localhost/my-image:functional-293873 testdata/build --alsologtostderr:
I0127 12:21:50.537438  483539 out.go:345] Setting OutFile to fd 1 ...
I0127 12:21:50.537570  483539 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:21:50.537581  483539 out.go:358] Setting ErrFile to fd 2...
I0127 12:21:50.537585  483539 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:21:50.537805  483539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
I0127 12:21:50.538395  483539 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:21:50.539006  483539 config.go:182] Loaded profile config "functional-293873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:21:50.539406  483539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:21:50.539458  483539 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:21:50.554936  483539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42311
I0127 12:21:50.555449  483539 main.go:141] libmachine: () Calling .GetVersion
I0127 12:21:50.555969  483539 main.go:141] libmachine: Using API Version  1
I0127 12:21:50.555992  483539 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:21:50.556344  483539 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:21:50.556578  483539 main.go:141] libmachine: (functional-293873) Calling .GetState
I0127 12:21:50.558619  483539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:21:50.558674  483539 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:21:50.573891  483539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34741
I0127 12:21:50.574436  483539 main.go:141] libmachine: () Calling .GetVersion
I0127 12:21:50.574931  483539 main.go:141] libmachine: Using API Version  1
I0127 12:21:50.574960  483539 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:21:50.575328  483539 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:21:50.575538  483539 main.go:141] libmachine: (functional-293873) Calling .DriverName
I0127 12:21:50.575791  483539 ssh_runner.go:195] Run: systemctl --version
I0127 12:21:50.575821  483539 main.go:141] libmachine: (functional-293873) Calling .GetSSHHostname
I0127 12:21:50.578581  483539 main.go:141] libmachine: (functional-293873) DBG | domain functional-293873 has defined MAC address 52:54:00:59:3e:d4 in network mk-functional-293873
I0127 12:21:50.578947  483539 main.go:141] libmachine: (functional-293873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:3e:d4", ip: ""} in network mk-functional-293873: {Iface:virbr1 ExpiryTime:2025-01-27 13:18:31 +0000 UTC Type:0 Mac:52:54:00:59:3e:d4 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-293873 Clientid:01:52:54:00:59:3e:d4}
I0127 12:21:50.578991  483539 main.go:141] libmachine: (functional-293873) DBG | domain functional-293873 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:3e:d4 in network mk-functional-293873
I0127 12:21:50.579079  483539 main.go:141] libmachine: (functional-293873) Calling .GetSSHPort
I0127 12:21:50.579293  483539 main.go:141] libmachine: (functional-293873) Calling .GetSSHKeyPath
I0127 12:21:50.579438  483539 main.go:141] libmachine: (functional-293873) Calling .GetSSHUsername
I0127 12:21:50.579576  483539 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/functional-293873/id_rsa Username:docker}
I0127 12:21:50.678223  483539 build_images.go:161] Building image from path: /tmp/build.727813989.tar
I0127 12:21:50.678362  483539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 12:21:50.691555  483539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.727813989.tar
I0127 12:21:50.696747  483539 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.727813989.tar: stat -c "%s %y" /var/lib/minikube/build/build.727813989.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.727813989.tar': No such file or directory
I0127 12:21:50.696816  483539 ssh_runner.go:362] scp /tmp/build.727813989.tar --> /var/lib/minikube/build/build.727813989.tar (3072 bytes)
I0127 12:21:50.735418  483539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.727813989
I0127 12:21:50.747465  483539 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.727813989 -xf /var/lib/minikube/build/build.727813989.tar
I0127 12:21:50.757338  483539 containerd.go:394] Building image: /var/lib/minikube/build/build.727813989
I0127 12:21:50.757435  483539 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.727813989 --local dockerfile=/var/lib/minikube/build/build.727813989 --output type=image,name=localhost/my-image:functional-293873
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.2s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.7s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:d382f08c64389374ccb4b587a7eb6d4970ceb5137285ef78c80a61a0366a0413 0.0s done
#8 exporting config sha256:3bdf902f83d3f892f7e4dc7cd772075e26a6d7f643c8610ce53ba1be0dd91a57 0.0s done
#8 naming to localhost/my-image:functional-293873 done
#8 DONE 0.4s
I0127 12:21:54.163823  483539 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.727813989 --local dockerfile=/var/lib/minikube/build/build.727813989 --output type=image,name=localhost/my-image:functional-293873: (3.406350679s)
I0127 12:21:54.163915  483539 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.727813989
I0127 12:21:54.190193  483539 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.727813989.tar
I0127 12:21:54.224442  483539 build_images.go:217] Built localhost/my-image:functional-293873 from /tmp/build.727813989.tar
I0127 12:21:54.224486  483539 build_images.go:133] succeeded building to: functional-293873
I0127 12:21:54.224492  483539 build_images.go:134] failed building to: 
I0127 12:21:54.224554  483539 main.go:141] libmachine: Making call to close driver server
I0127 12:21:54.224573  483539 main.go:141] libmachine: (functional-293873) Calling .Close
I0127 12:21:54.224897  483539 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:21:54.224928  483539 main.go:141] libmachine: (functional-293873) DBG | Closing plugin on server side
I0127 12:21:54.224942  483539 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:21:54.224953  483539 main.go:141] libmachine: Making call to close driver server
I0127 12:21:54.224962  483539 main.go:141] libmachine: (functional-293873) Calling .Close
I0127 12:21:54.225227  483539 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:21:54.225243  483539 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:21:54.225263  483539 main.go:141] libmachine: (functional-293873) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.20s)
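
The build above ships a tar of testdata/build into the guest and runs buildctl against it (three steps: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt). A minimal Go sketch that reproduces a similar build through the same CLI follows; the Dockerfile text and file names are assumptions modelled on the recorded build steps, not the repository's actual testdata.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Recreate a tiny build context like testdata/build: the Dockerfile text and
	// the content.txt payload here are assumptions modelled on the recorded
	// build steps (FROM busybox, RUN true, ADD content.txt).
	dir, err := os.MkdirTemp("", "minikube-build")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		fmt.Println(err)
		return
	}

	// Same CLI verbs as the recorded run: build the context inside the guest and
	// tag the result so `image ls` can see it.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-293873",
		"image", "build", "-t", "localhost/my-image:functional-293873", dir,
		"--alsologtostderr").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("image build failed:", err)
	}
}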

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.554258119s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-293873
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image load --daemon kicbase/echo-server:functional-293873 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-293873 image load --daemon kicbase/echo-server:functional-293873 --alsologtostderr: (1.861413901s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.09s)
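
A minimal Go sketch of the load-from-daemon flow this subtest covers: push the locally tagged echo-server image from the host's docker daemon into the cluster's containerd store, then list images to confirm it arrived. Profile and tag are copied from the log; the run helper is illustrative, not part of the test suite.

package main

import (
	"fmt"
	"os/exec"
)

// run executes one minikube command and prints its combined output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// Push the locally tagged echo-server image from the host docker daemon into
	// the cluster's containerd store, then list images to confirm it arrived.
	if err := run("-p", "functional-293873", "image", "load", "--daemon",
		"kicbase/echo-server:functional-293873"); err != nil {
		fmt.Println("image load failed:", err)
		return
	}
	if err := run("-p", "functional-293873", "image", "ls"); err != nil {
		fmt.Println("image ls failed:", err)
	}
}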

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image load --daemon kicbase/echo-server:functional-293873 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-293873 image load --daemon kicbase/echo-server:functional-293873 --alsologtostderr: (1.098590611s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-293873
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image load --daemon kicbase/echo-server:functional-293873 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-293873 image load --daemon kicbase/echo-server:functional-293873 --alsologtostderr: (1.547758888s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image save kicbase/echo-server:functional-293873 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image rm kicbase/echo-server:functional-293873 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-293873
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-293873 image save --daemon kicbase/echo-server:functional-293873 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-293873
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)
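
Taken together, the last four image subtests form a round trip: save the image to a tarball, remove it from the cluster, load the tarball back, then export to the host daemon. A minimal Go sketch of that sequence using the same CLI verbs follows; the tarball path here is a placeholder (the job itself writes into its Jenkins workspace).

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used by this job with the given arguments
// and prints whatever it produced.
func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("command failed:", args, err)
	}
}

func main() {
	profile := "functional-293873"
	img := "kicbase/echo-server:functional-293873"
	tarball := "/tmp/echo-server-save.tar" // placeholder path

	run("-p", profile, "image", "save", img, tarball)    // cluster -> tarball
	run("-p", profile, "image", "rm", img)               // drop the image from the cluster
	run("-p", profile, "image", "load", tarball)         // tarball -> cluster
	run("-p", profile, "image", "save", "--daemon", img) // cluster -> host docker daemon
}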

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-293873
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-293873
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-293873
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (202.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-738593 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 12:23:39.253960  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:24:06.963741  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-738593 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m21.680916303s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (202.38s)
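
A minimal Go sketch of the cluster bring-up this test performs: the same start flags recorded above (--ha requests the multi-control-plane topology) followed by a status check. Profile name, driver, and runtime are copied from the log; the timing output is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Bring up a multi-control-plane cluster with the flags recorded above,
	// then ask for its status.
	start := time.Now()
	out, err := exec.Command("out/minikube-linux-amd64", "start", "-p", "ha-738593",
		"--wait=true", "--memory=2200", "--ha", "-v=7", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=containerd").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("start failed:", err)
		return
	}
	fmt.Println("cluster ready after", time.Since(start))

	status, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-738593",
		"status", "-v=7", "--alsologtostderr").CombinedOutput()
	fmt.Print(string(status))
}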

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-738593 -- rollout status deployment/busybox: (2.772327259s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-4gsgh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-b5pd8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-gwkft -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-4gsgh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-b5pd8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-gwkft -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-4gsgh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-b5pd8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-gwkft -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.98s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.23s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-4gsgh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-4gsgh -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-b5pd8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-b5pd8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-gwkft -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-738593 -- exec busybox-58667487b6-gwkft -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.72s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-738593 -v=7 --alsologtostderr
E0127 12:26:24.096911  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:24.103429  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:24.114926  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:24.136446  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:24.177960  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:24.259551  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:24.421219  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:24.742811  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:25.384259  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:26.665678  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:29.227419  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-738593 -v=7 --alsologtostderr: (56.794553058s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 status -v=7 --alsologtostderr
E0127 12:26:34.348947  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.72s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.08s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-738593 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.63s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp testdata/cp-test.txt ha-738593:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1778345860/001/cp-test_ha-738593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593:/home/docker/cp-test.txt ha-738593-m02:/home/docker/cp-test_ha-738593_ha-738593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m02 "sudo cat /home/docker/cp-test_ha-738593_ha-738593-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593:/home/docker/cp-test.txt ha-738593-m03:/home/docker/cp-test_ha-738593_ha-738593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m03 "sudo cat /home/docker/cp-test_ha-738593_ha-738593-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593:/home/docker/cp-test.txt ha-738593-m04:/home/docker/cp-test_ha-738593_ha-738593-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m04 "sudo cat /home/docker/cp-test_ha-738593_ha-738593-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp testdata/cp-test.txt ha-738593-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1778345860/001/cp-test_ha-738593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593-m02:/home/docker/cp-test.txt ha-738593:/home/docker/cp-test_ha-738593-m02_ha-738593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593 "sudo cat /home/docker/cp-test_ha-738593-m02_ha-738593.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593-m02:/home/docker/cp-test.txt ha-738593-m03:/home/docker/cp-test_ha-738593-m02_ha-738593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m03 "sudo cat /home/docker/cp-test_ha-738593-m02_ha-738593-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593-m02:/home/docker/cp-test.txt ha-738593-m04:/home/docker/cp-test_ha-738593-m02_ha-738593-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m04 "sudo cat /home/docker/cp-test_ha-738593-m02_ha-738593-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp testdata/cp-test.txt ha-738593-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1778345860/001/cp-test_ha-738593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593-m03:/home/docker/cp-test.txt ha-738593:/home/docker/cp-test_ha-738593-m03_ha-738593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593 "sudo cat /home/docker/cp-test_ha-738593-m03_ha-738593.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593-m03:/home/docker/cp-test.txt ha-738593-m02:/home/docker/cp-test_ha-738593-m03_ha-738593-m02.txt
E0127 12:26:44.591396  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m02 "sudo cat /home/docker/cp-test_ha-738593-m03_ha-738593-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593-m03:/home/docker/cp-test.txt ha-738593-m04:/home/docker/cp-test_ha-738593-m03_ha-738593-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m04 "sudo cat /home/docker/cp-test_ha-738593-m03_ha-738593-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp testdata/cp-test.txt ha-738593-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1778345860/001/cp-test_ha-738593-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593-m04:/home/docker/cp-test.txt ha-738593:/home/docker/cp-test_ha-738593-m04_ha-738593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593 "sudo cat /home/docker/cp-test_ha-738593-m04_ha-738593.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593-m04:/home/docker/cp-test.txt ha-738593-m02:/home/docker/cp-test_ha-738593-m04_ha-738593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m02 "sudo cat /home/docker/cp-test_ha-738593-m04_ha-738593-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 cp ha-738593-m04:/home/docker/cp-test.txt ha-738593-m03:/home/docker/cp-test_ha-738593-m04_ha-738593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 ssh -n ha-738593-m03 "sudo cat /home/docker/cp-test_ha-738593-m04_ha-738593-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.63s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.7s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 node stop m02 -v=7 --alsologtostderr
E0127 12:27:05.072682  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:46.034496  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-738593 node stop m02 -v=7 --alsologtostderr: (1m31.020476957s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-738593 status -v=7 --alsologtostderr: exit status 7 (681.642489ms)

                                                
                                                
-- stdout --
	ha-738593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-738593-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-738593-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-738593-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:28:20.404926  488283 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:28:20.405095  488283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:28:20.405109  488283 out.go:358] Setting ErrFile to fd 2...
	I0127 12:28:20.405116  488283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:28:20.405393  488283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 12:28:20.405617  488283 out.go:352] Setting JSON to false
	I0127 12:28:20.405664  488283 mustload.go:65] Loading cluster: ha-738593
	I0127 12:28:20.405716  488283 notify.go:220] Checking for updates...
	I0127 12:28:20.406134  488283 config.go:182] Loaded profile config "ha-738593": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:28:20.406155  488283 status.go:174] checking status of ha-738593 ...
	I0127 12:28:20.406648  488283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:28:20.406733  488283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:20.425535  488283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0127 12:28:20.426071  488283 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:20.426738  488283 main.go:141] libmachine: Using API Version  1
	I0127 12:28:20.426762  488283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:20.427172  488283 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:20.427415  488283 main.go:141] libmachine: (ha-738593) Calling .GetState
	I0127 12:28:20.429318  488283 status.go:371] ha-738593 host status = "Running" (err=<nil>)
	I0127 12:28:20.429344  488283 host.go:66] Checking if "ha-738593" exists ...
	I0127 12:28:20.429663  488283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:28:20.429710  488283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:20.444828  488283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0127 12:28:20.445318  488283 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:20.445898  488283 main.go:141] libmachine: Using API Version  1
	I0127 12:28:20.445922  488283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:20.446258  488283 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:20.446465  488283 main.go:141] libmachine: (ha-738593) Calling .GetIP
	I0127 12:28:20.449539  488283 main.go:141] libmachine: (ha-738593) DBG | domain ha-738593 has defined MAC address 52:54:00:97:d8:ee in network mk-ha-738593
	I0127 12:28:20.450048  488283 main.go:141] libmachine: (ha-738593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:d8:ee", ip: ""} in network mk-ha-738593: {Iface:virbr1 ExpiryTime:2025-01-27 13:22:23 +0000 UTC Type:0 Mac:52:54:00:97:d8:ee Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-738593 Clientid:01:52:54:00:97:d8:ee}
	I0127 12:28:20.450074  488283 main.go:141] libmachine: (ha-738593) DBG | domain ha-738593 has defined IP address 192.168.39.113 and MAC address 52:54:00:97:d8:ee in network mk-ha-738593
	I0127 12:28:20.450246  488283 host.go:66] Checking if "ha-738593" exists ...
	I0127 12:28:20.450588  488283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:28:20.450652  488283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:20.466164  488283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0127 12:28:20.466648  488283 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:20.467102  488283 main.go:141] libmachine: Using API Version  1
	I0127 12:28:20.467135  488283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:20.467480  488283 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:20.467671  488283 main.go:141] libmachine: (ha-738593) Calling .DriverName
	I0127 12:28:20.467890  488283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:28:20.467914  488283 main.go:141] libmachine: (ha-738593) Calling .GetSSHHostname
	I0127 12:28:20.471004  488283 main.go:141] libmachine: (ha-738593) DBG | domain ha-738593 has defined MAC address 52:54:00:97:d8:ee in network mk-ha-738593
	I0127 12:28:20.471707  488283 main.go:141] libmachine: (ha-738593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:d8:ee", ip: ""} in network mk-ha-738593: {Iface:virbr1 ExpiryTime:2025-01-27 13:22:23 +0000 UTC Type:0 Mac:52:54:00:97:d8:ee Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-738593 Clientid:01:52:54:00:97:d8:ee}
	I0127 12:28:20.471744  488283 main.go:141] libmachine: (ha-738593) DBG | domain ha-738593 has defined IP address 192.168.39.113 and MAC address 52:54:00:97:d8:ee in network mk-ha-738593
	I0127 12:28:20.471852  488283 main.go:141] libmachine: (ha-738593) Calling .GetSSHPort
	I0127 12:28:20.472086  488283 main.go:141] libmachine: (ha-738593) Calling .GetSSHKeyPath
	I0127 12:28:20.472279  488283 main.go:141] libmachine: (ha-738593) Calling .GetSSHUsername
	I0127 12:28:20.472459  488283 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/ha-738593/id_rsa Username:docker}
	I0127 12:28:20.560774  488283 ssh_runner.go:195] Run: systemctl --version
	I0127 12:28:20.568134  488283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:28:20.585012  488283 kubeconfig.go:125] found "ha-738593" server: "https://192.168.39.254:8443"
	I0127 12:28:20.585052  488283 api_server.go:166] Checking apiserver status ...
	I0127 12:28:20.585092  488283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:28:20.600378  488283 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup
	W0127 12:28:20.618184  488283 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:28:20.618242  488283 ssh_runner.go:195] Run: ls
	I0127 12:28:20.623387  488283 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 12:28:20.628199  488283 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 12:28:20.628229  488283 status.go:463] ha-738593 apiserver status = Running (err=<nil>)
	I0127 12:28:20.628243  488283 status.go:176] ha-738593 status: &{Name:ha-738593 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:28:20.628266  488283 status.go:174] checking status of ha-738593-m02 ...
	I0127 12:28:20.628585  488283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:28:20.628636  488283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:20.644864  488283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0127 12:28:20.645389  488283 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:20.645942  488283 main.go:141] libmachine: Using API Version  1
	I0127 12:28:20.645962  488283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:20.646267  488283 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:20.646493  488283 main.go:141] libmachine: (ha-738593-m02) Calling .GetState
	I0127 12:28:20.648205  488283 status.go:371] ha-738593-m02 host status = "Stopped" (err=<nil>)
	I0127 12:28:20.648219  488283 status.go:384] host is not running, skipping remaining checks
	I0127 12:28:20.648224  488283 status.go:176] ha-738593-m02 status: &{Name:ha-738593-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:28:20.648246  488283 status.go:174] checking status of ha-738593-m03 ...
	I0127 12:28:20.648572  488283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:28:20.648620  488283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:20.664481  488283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41589
	I0127 12:28:20.664986  488283 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:20.665497  488283 main.go:141] libmachine: Using API Version  1
	I0127 12:28:20.665531  488283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:20.665830  488283 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:20.666034  488283 main.go:141] libmachine: (ha-738593-m03) Calling .GetState
	I0127 12:28:20.667622  488283 status.go:371] ha-738593-m03 host status = "Running" (err=<nil>)
	I0127 12:28:20.667642  488283 host.go:66] Checking if "ha-738593-m03" exists ...
	I0127 12:28:20.667985  488283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:28:20.668026  488283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:20.683666  488283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43287
	I0127 12:28:20.684184  488283 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:20.684725  488283 main.go:141] libmachine: Using API Version  1
	I0127 12:28:20.684758  488283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:20.685163  488283 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:20.685385  488283 main.go:141] libmachine: (ha-738593-m03) Calling .GetIP
	I0127 12:28:20.688558  488283 main.go:141] libmachine: (ha-738593-m03) DBG | domain ha-738593-m03 has defined MAC address 52:54:00:8a:34:cf in network mk-ha-738593
	I0127 12:28:20.689071  488283 main.go:141] libmachine: (ha-738593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:34:cf", ip: ""} in network mk-ha-738593: {Iface:virbr1 ExpiryTime:2025-01-27 13:24:29 +0000 UTC Type:0 Mac:52:54:00:8a:34:cf Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-738593-m03 Clientid:01:52:54:00:8a:34:cf}
	I0127 12:28:20.689105  488283 main.go:141] libmachine: (ha-738593-m03) DBG | domain ha-738593-m03 has defined IP address 192.168.39.41 and MAC address 52:54:00:8a:34:cf in network mk-ha-738593
	I0127 12:28:20.689309  488283 host.go:66] Checking if "ha-738593-m03" exists ...
	I0127 12:28:20.689655  488283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:28:20.689698  488283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:20.705643  488283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36097
	I0127 12:28:20.706099  488283 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:20.706586  488283 main.go:141] libmachine: Using API Version  1
	I0127 12:28:20.706607  488283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:20.706969  488283 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:20.707189  488283 main.go:141] libmachine: (ha-738593-m03) Calling .DriverName
	I0127 12:28:20.707428  488283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:28:20.707471  488283 main.go:141] libmachine: (ha-738593-m03) Calling .GetSSHHostname
	I0127 12:28:20.710603  488283 main.go:141] libmachine: (ha-738593-m03) DBG | domain ha-738593-m03 has defined MAC address 52:54:00:8a:34:cf in network mk-ha-738593
	I0127 12:28:20.711090  488283 main.go:141] libmachine: (ha-738593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:34:cf", ip: ""} in network mk-ha-738593: {Iface:virbr1 ExpiryTime:2025-01-27 13:24:29 +0000 UTC Type:0 Mac:52:54:00:8a:34:cf Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-738593-m03 Clientid:01:52:54:00:8a:34:cf}
	I0127 12:28:20.711118  488283 main.go:141] libmachine: (ha-738593-m03) DBG | domain ha-738593-m03 has defined IP address 192.168.39.41 and MAC address 52:54:00:8a:34:cf in network mk-ha-738593
	I0127 12:28:20.711319  488283 main.go:141] libmachine: (ha-738593-m03) Calling .GetSSHPort
	I0127 12:28:20.711547  488283 main.go:141] libmachine: (ha-738593-m03) Calling .GetSSHKeyPath
	I0127 12:28:20.711690  488283 main.go:141] libmachine: (ha-738593-m03) Calling .GetSSHUsername
	I0127 12:28:20.711817  488283 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/ha-738593-m03/id_rsa Username:docker}
	I0127 12:28:20.799897  488283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:28:20.817624  488283 kubeconfig.go:125] found "ha-738593" server: "https://192.168.39.254:8443"
	I0127 12:28:20.817664  488283 api_server.go:166] Checking apiserver status ...
	I0127 12:28:20.817711  488283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:28:20.837183  488283 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0127 12:28:20.847367  488283 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:28:20.847440  488283 ssh_runner.go:195] Run: ls
	I0127 12:28:20.852525  488283 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 12:28:20.857490  488283 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 12:28:20.857518  488283 status.go:463] ha-738593-m03 apiserver status = Running (err=<nil>)
	I0127 12:28:20.857527  488283 status.go:176] ha-738593-m03 status: &{Name:ha-738593-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:28:20.857555  488283 status.go:174] checking status of ha-738593-m04 ...
	I0127 12:28:20.857976  488283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:28:20.858028  488283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:20.874186  488283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0127 12:28:20.874747  488283 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:20.875325  488283 main.go:141] libmachine: Using API Version  1
	I0127 12:28:20.875352  488283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:20.875677  488283 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:20.875855  488283 main.go:141] libmachine: (ha-738593-m04) Calling .GetState
	I0127 12:28:20.877419  488283 status.go:371] ha-738593-m04 host status = "Running" (err=<nil>)
	I0127 12:28:20.877434  488283 host.go:66] Checking if "ha-738593-m04" exists ...
	I0127 12:28:20.877722  488283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:28:20.877763  488283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:20.894147  488283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43473
	I0127 12:28:20.894556  488283 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:20.895067  488283 main.go:141] libmachine: Using API Version  1
	I0127 12:28:20.895092  488283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:20.895431  488283 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:20.895641  488283 main.go:141] libmachine: (ha-738593-m04) Calling .GetIP
	I0127 12:28:20.898623  488283 main.go:141] libmachine: (ha-738593-m04) DBG | domain ha-738593-m04 has defined MAC address 52:54:00:d1:0f:24 in network mk-ha-738593
	I0127 12:28:20.899030  488283 main.go:141] libmachine: (ha-738593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:0f:24", ip: ""} in network mk-ha-738593: {Iface:virbr1 ExpiryTime:2025-01-27 13:25:53 +0000 UTC Type:0 Mac:52:54:00:d1:0f:24 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ha-738593-m04 Clientid:01:52:54:00:d1:0f:24}
	I0127 12:28:20.899050  488283 main.go:141] libmachine: (ha-738593-m04) DBG | domain ha-738593-m04 has defined IP address 192.168.39.34 and MAC address 52:54:00:d1:0f:24 in network mk-ha-738593
	I0127 12:28:20.899197  488283 host.go:66] Checking if "ha-738593-m04" exists ...
	I0127 12:28:20.899667  488283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:28:20.899719  488283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:28:20.915531  488283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0127 12:28:20.916014  488283 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:28:20.916555  488283 main.go:141] libmachine: Using API Version  1
	I0127 12:28:20.916584  488283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:28:20.916982  488283 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:28:20.917227  488283 main.go:141] libmachine: (ha-738593-m04) Calling .DriverName
	I0127 12:28:20.917430  488283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:28:20.917460  488283 main.go:141] libmachine: (ha-738593-m04) Calling .GetSSHHostname
	I0127 12:28:20.920550  488283 main.go:141] libmachine: (ha-738593-m04) DBG | domain ha-738593-m04 has defined MAC address 52:54:00:d1:0f:24 in network mk-ha-738593
	I0127 12:28:20.920984  488283 main.go:141] libmachine: (ha-738593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:0f:24", ip: ""} in network mk-ha-738593: {Iface:virbr1 ExpiryTime:2025-01-27 13:25:53 +0000 UTC Type:0 Mac:52:54:00:d1:0f:24 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ha-738593-m04 Clientid:01:52:54:00:d1:0f:24}
	I0127 12:28:20.921019  488283 main.go:141] libmachine: (ha-738593-m04) DBG | domain ha-738593-m04 has defined IP address 192.168.39.34 and MAC address 52:54:00:d1:0f:24 in network mk-ha-738593
	I0127 12:28:20.921169  488283 main.go:141] libmachine: (ha-738593-m04) Calling .GetSSHPort
	I0127 12:28:20.921372  488283 main.go:141] libmachine: (ha-738593-m04) Calling .GetSSHKeyPath
	I0127 12:28:20.921530  488283 main.go:141] libmachine: (ha-738593-m04) Calling .GetSSHUsername
	I0127 12:28:20.921722  488283 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/ha-738593-m04/id_rsa Username:docker}
	I0127 12:28:21.013545  488283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:28:21.033926  488283 status.go:176] ha-738593-m04 status: &{Name:ha-738593-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.70s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (41.09s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 node start m02 -v=7 --alsologtostderr
E0127 12:28:39.253002  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-738593 node start m02 -v=7 --alsologtostderr: (40.14467656s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.09s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (474.64s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-738593 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-738593 -v=7 --alsologtostderr
E0127 12:29:07.955962  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:24.097613  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:51.797943  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-738593 -v=7 --alsologtostderr: (4m33.690998452s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-738593 --wait=true -v=7 --alsologtostderr
E0127 12:33:39.253378  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:02.326187  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:36:24.097008  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-738593 --wait=true -v=7 --alsologtostderr: (3m20.841978866s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-738593
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (474.64s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.14s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-738593 node delete m03 -v=7 --alsologtostderr: (6.345751042s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.14s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.99s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 stop -v=7 --alsologtostderr
E0127 12:38:39.253260  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:41:24.097462  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-738593 stop -v=7 --alsologtostderr: (4m32.871620317s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-738593 status -v=7 --alsologtostderr: exit status 7 (115.25504ms)

                                                
                                                
-- stdout --
	ha-738593
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-738593-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-738593-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:41:39.022587  492297 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:41:39.022718  492297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:41:39.022729  492297 out.go:358] Setting ErrFile to fd 2...
	I0127 12:41:39.022736  492297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:41:39.022960  492297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 12:41:39.023168  492297 out.go:352] Setting JSON to false
	I0127 12:41:39.023210  492297 mustload.go:65] Loading cluster: ha-738593
	I0127 12:41:39.023340  492297 notify.go:220] Checking for updates...
	I0127 12:41:39.023710  492297 config.go:182] Loaded profile config "ha-738593": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:41:39.023736  492297 status.go:174] checking status of ha-738593 ...
	I0127 12:41:39.024175  492297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:41:39.024231  492297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:41:39.045335  492297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45669
	I0127 12:41:39.045812  492297 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:41:39.046378  492297 main.go:141] libmachine: Using API Version  1
	I0127 12:41:39.046400  492297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:41:39.046865  492297 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:41:39.047082  492297 main.go:141] libmachine: (ha-738593) Calling .GetState
	I0127 12:41:39.048826  492297 status.go:371] ha-738593 host status = "Stopped" (err=<nil>)
	I0127 12:41:39.048845  492297 status.go:384] host is not running, skipping remaining checks
	I0127 12:41:39.048852  492297 status.go:176] ha-738593 status: &{Name:ha-738593 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:41:39.048898  492297 status.go:174] checking status of ha-738593-m02 ...
	I0127 12:41:39.049192  492297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:41:39.049229  492297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:41:39.064395  492297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38797
	I0127 12:41:39.064785  492297 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:41:39.065272  492297 main.go:141] libmachine: Using API Version  1
	I0127 12:41:39.065295  492297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:41:39.065631  492297 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:41:39.065833  492297 main.go:141] libmachine: (ha-738593-m02) Calling .GetState
	I0127 12:41:39.067446  492297 status.go:371] ha-738593-m02 host status = "Stopped" (err=<nil>)
	I0127 12:41:39.067466  492297 status.go:384] host is not running, skipping remaining checks
	I0127 12:41:39.067473  492297 status.go:176] ha-738593-m02 status: &{Name:ha-738593-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:41:39.067499  492297 status.go:174] checking status of ha-738593-m04 ...
	I0127 12:41:39.067813  492297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:41:39.067863  492297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:41:39.083740  492297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0127 12:41:39.084211  492297 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:41:39.084729  492297 main.go:141] libmachine: Using API Version  1
	I0127 12:41:39.084751  492297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:41:39.085108  492297 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:41:39.085319  492297 main.go:141] libmachine: (ha-738593-m04) Calling .GetState
	I0127 12:41:39.086958  492297 status.go:371] ha-738593-m04 host status = "Stopped" (err=<nil>)
	I0127 12:41:39.086975  492297 status.go:384] host is not running, skipping remaining checks
	I0127 12:41:39.086982  492297 status.go:176] ha-738593-m04 status: &{Name:ha-738593-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.99s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (124.71s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-738593 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 12:42:47.160200  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:43:39.253701  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-738593 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m3.927481745s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (124.71s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (72.27s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-738593 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-738593 --control-plane -v=7 --alsologtostderr: (1m11.382845606s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-738593 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.27s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

                                                
                                    
TestJSONOutput/start/Command (84.16s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-848433 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0127 12:46:24.100200  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-848433 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m24.16045137s)
--- PASS: TestJSONOutput/start/Command (84.16s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-848433 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.7s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-848433 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.64s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-848433 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-848433 --output=json --user=testUser: (6.639598417s)
--- PASS: TestJSONOutput/stop/Command (6.64s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
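Note: the DistinctCurrentSteps and IncreasingCurrentSteps subtests above operate on the --output=json event stream captured by the corresponding Command subtests; going by the subtest names and the CloudEvents envelope visible under TestErrorJSONOutput further below, they check that the currentstep values of step events do not repeat and do not decrease. A minimal, hedged Go sketch of that kind of check (event lines read from stdin; the field names are assumed from the sample output shown below, this is not the test's own code):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe the --output=json lines in on stdin; non-JSON and non-step lines are skipped.
	prev := -1
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		cur, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			continue
		}
		if cur <= prev {
			fmt.Printf("currentstep went from %d to %d\n", prev, cur)
			os.Exit(1)
		}
		prev = cur
	}
	fmt.Println("currentstep values were strictly increasing")
}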

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-504844 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-504844 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (73.280847ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"eac64230-e8c8-40c3-bf86-4ba7dcdddac9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-504844] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e3feed76-3fba-4396-b024-0e0bbb6e4e2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20317"}}
	{"specversion":"1.0","id":"825fd002-9d70-4d0f-a2de-b6b6b567863a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"818b0c0b-9327-45fa-9483-d2b78f279962","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig"}}
	{"specversion":"1.0","id":"a7c4fc89-fd22-4cb8-8049-fd23829223bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube"}}
	{"specversion":"1.0","id":"edf654bc-25d3-483e-a231-ae05c4d3a4c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e146c521-14c0-4338-a2b8-17b705866b99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7bc40f8d-d114-415b-8cb6-6d76ca2571c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-504844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-504844
--- PASS: TestErrorJSONOutput (0.22s)
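Each line in the stdout above is a CloudEvents-style envelope, and the final io.k8s.sigs.minikube.error event carries the exit code, error name, and message in its data field. A minimal Go sketch that decodes that one event (the sample line is copied from the output above; nothing beyond it is assumed):

package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event emitted by the failed `--driver=fail` start above.
	line := `{"specversion":"1.0","id":"7bc40f8d-d114-415b-8cb6-6d76ca2571c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exitcode %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}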

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (89.1s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-654284 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-654284 --driver=kvm2  --container-runtime=containerd: (41.823317395s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-666907 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-666907 --driver=kvm2  --container-runtime=containerd: (44.329642044s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-654284
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-666907
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-666907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-666907
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-666907: (1.027608884s)
helpers_test.go:175: Cleaning up "first-654284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-654284
--- PASS: TestMinikubeProfile (89.10s)
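The profile checks above rely on `profile list -ojson`. The JSON schema is not shown in this report, so the sketch below only decodes the top-level keys generically (binary path and flag are taken from the log; this is an illustrative sketch, not the test's implementation):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same command the test uses and decode it without assuming a schema.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var parsed map[string]json.RawMessage
	if err := json.Unmarshal(out, &parsed); err != nil {
		panic(err)
	}
	for key := range parsed {
		fmt.Println("top-level key:", key)
	}
}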

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (28.38s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-409651 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-409651 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.381568173s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.38s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-409651 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-409651 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)
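The mount verification above amounts to listing the mounted path and confirming that a 9p filesystem appears in the guest's mount table. A rough Go equivalent of that check (profile name and binary path taken from the log; a sketch, not the test's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the guest for its mount table via `minikube ssh` and look for a 9p entry,
	// mirroring the `ssh -- mount | grep 9p` step in the log above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-409651",
		"ssh", "--", "mount").CombinedOutput()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "9p") {
		fmt.Println("9p mount present")
	} else {
		fmt.Println("9p mount not found")
	}
}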

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (29.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-430018 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0127 12:48:39.257439  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-430018 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.279302055s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-430018 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-430018 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-409651 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-430018 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-430018 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-430018
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-430018: (1.364953485s)
--- PASS: TestMountStart/serial/Stop (1.37s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.14s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-430018
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-430018: (22.137298214s)
--- PASS: TestMountStart/serial/RestartStopped (23.14s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-430018 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-430018 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (111.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-504309 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-504309 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m51.512267117s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.94s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-504309 -- rollout status deployment/busybox: (2.406845986s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- exec busybox-58667487b6-dbkfd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- exec busybox-58667487b6-hkp72 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- exec busybox-58667487b6-dbkfd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- exec busybox-58667487b6-hkp72 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- exec busybox-58667487b6-dbkfd -- nslookup kubernetes.default.svc.cluster.local
E0127 12:51:24.097606  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- exec busybox-58667487b6-hkp72 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.00s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- exec busybox-58667487b6-dbkfd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- exec busybox-58667487b6-dbkfd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- exec busybox-58667487b6-hkp72 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-504309 -- exec busybox-58667487b6-hkp72 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (50.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-504309 -v 3 --alsologtostderr
E0127 12:51:42.328955  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-504309 -v 3 --alsologtostderr: (50.222993098s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.84s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-504309 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 cp testdata/cp-test.txt multinode-504309:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 cp multinode-504309:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2662934599/001/cp-test_multinode-504309.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 cp multinode-504309:/home/docker/cp-test.txt multinode-504309-m02:/home/docker/cp-test_multinode-504309_multinode-504309-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309-m02 "sudo cat /home/docker/cp-test_multinode-504309_multinode-504309-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 cp multinode-504309:/home/docker/cp-test.txt multinode-504309-m03:/home/docker/cp-test_multinode-504309_multinode-504309-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309-m03 "sudo cat /home/docker/cp-test_multinode-504309_multinode-504309-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 cp testdata/cp-test.txt multinode-504309-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 cp multinode-504309-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2662934599/001/cp-test_multinode-504309-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 cp multinode-504309-m02:/home/docker/cp-test.txt multinode-504309:/home/docker/cp-test_multinode-504309-m02_multinode-504309.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309 "sudo cat /home/docker/cp-test_multinode-504309-m02_multinode-504309.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 cp multinode-504309-m02:/home/docker/cp-test.txt multinode-504309-m03:/home/docker/cp-test_multinode-504309-m02_multinode-504309-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309-m03 "sudo cat /home/docker/cp-test_multinode-504309-m02_multinode-504309-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 cp testdata/cp-test.txt multinode-504309-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 cp multinode-504309-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2662934599/001/cp-test_multinode-504309-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 cp multinode-504309-m03:/home/docker/cp-test.txt multinode-504309:/home/docker/cp-test_multinode-504309-m03_multinode-504309.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309 "sudo cat /home/docker/cp-test_multinode-504309-m03_multinode-504309.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 cp multinode-504309-m03:/home/docker/cp-test.txt multinode-504309-m02:/home/docker/cp-test_multinode-504309-m03_multinode-504309-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 ssh -n multinode-504309-m02 "sudo cat /home/docker/cp-test_multinode-504309-m03_multinode-504309-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.59s)
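The copy matrix above exercises `minikube cp` in both directions and between nodes: a bare path is local, while `<node>:<path>` addresses a specific node of the profile, and each copy is verified with `ssh -n <node> "sudo cat ..."`. A minimal sketch of one leg of that round trip (profile and node names copied from the log; illustrative only, not the test code):

package main

import (
	"fmt"
	"os/exec"
)

// run executes the minikube binary with the given arguments and returns combined output.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	// Copy a local file onto the m02 node, then read it back over ssh to verify.
	run("-p", "multinode-504309", "cp", "testdata/cp-test.txt",
		"multinode-504309-m02:/home/docker/cp-test.txt")
	fmt.Print(run("-p", "multinode-504309", "ssh", "-n", "multinode-504309-m02",
		"sudo cat /home/docker/cp-test.txt"))
}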

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-504309 node stop m03: (1.438521041s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-504309 status: exit status 7 (446.721851ms)

                                                
                                                
-- stdout --
	multinode-504309
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-504309-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-504309-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-504309 status --alsologtostderr: exit status 7 (460.137131ms)

                                                
                                                
-- stdout --
	multinode-504309
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-504309-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-504309-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:52:26.225360  499955 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:52:26.225658  499955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:52:26.225670  499955 out.go:358] Setting ErrFile to fd 2...
	I0127 12:52:26.225675  499955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:52:26.225849  499955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 12:52:26.226024  499955 out.go:352] Setting JSON to false
	I0127 12:52:26.226052  499955 mustload.go:65] Loading cluster: multinode-504309
	I0127 12:52:26.226161  499955 notify.go:220] Checking for updates...
	I0127 12:52:26.226443  499955 config.go:182] Loaded profile config "multinode-504309": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:52:26.226463  499955 status.go:174] checking status of multinode-504309 ...
	I0127 12:52:26.226855  499955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:52:26.226897  499955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:52:26.250913  499955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41687
	I0127 12:52:26.251433  499955 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:52:26.252060  499955 main.go:141] libmachine: Using API Version  1
	I0127 12:52:26.252109  499955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:52:26.252437  499955 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:52:26.252684  499955 main.go:141] libmachine: (multinode-504309) Calling .GetState
	I0127 12:52:26.254245  499955 status.go:371] multinode-504309 host status = "Running" (err=<nil>)
	I0127 12:52:26.254264  499955 host.go:66] Checking if "multinode-504309" exists ...
	I0127 12:52:26.254549  499955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:52:26.254598  499955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:52:26.270828  499955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0127 12:52:26.271338  499955 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:52:26.271826  499955 main.go:141] libmachine: Using API Version  1
	I0127 12:52:26.271843  499955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:52:26.272185  499955 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:52:26.272364  499955 main.go:141] libmachine: (multinode-504309) Calling .GetIP
	I0127 12:52:26.275134  499955 main.go:141] libmachine: (multinode-504309) DBG | domain multinode-504309 has defined MAC address 52:54:00:cb:77:e2 in network mk-multinode-504309
	I0127 12:52:26.275579  499955 main.go:141] libmachine: (multinode-504309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:77:e2", ip: ""} in network mk-multinode-504309: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:44 +0000 UTC Type:0 Mac:52:54:00:cb:77:e2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-504309 Clientid:01:52:54:00:cb:77:e2}
	I0127 12:52:26.275612  499955 main.go:141] libmachine: (multinode-504309) DBG | domain multinode-504309 has defined IP address 192.168.39.250 and MAC address 52:54:00:cb:77:e2 in network mk-multinode-504309
	I0127 12:52:26.275701  499955 host.go:66] Checking if "multinode-504309" exists ...
	I0127 12:52:26.276069  499955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:52:26.276106  499955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:52:26.295971  499955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35529
	I0127 12:52:26.296371  499955 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:52:26.296954  499955 main.go:141] libmachine: Using API Version  1
	I0127 12:52:26.296981  499955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:52:26.297335  499955 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:52:26.297545  499955 main.go:141] libmachine: (multinode-504309) Calling .DriverName
	I0127 12:52:26.297723  499955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:52:26.297744  499955 main.go:141] libmachine: (multinode-504309) Calling .GetSSHHostname
	I0127 12:52:26.300792  499955 main.go:141] libmachine: (multinode-504309) DBG | domain multinode-504309 has defined MAC address 52:54:00:cb:77:e2 in network mk-multinode-504309
	I0127 12:52:26.301241  499955 main.go:141] libmachine: (multinode-504309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:77:e2", ip: ""} in network mk-multinode-504309: {Iface:virbr1 ExpiryTime:2025-01-27 13:49:44 +0000 UTC Type:0 Mac:52:54:00:cb:77:e2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-504309 Clientid:01:52:54:00:cb:77:e2}
	I0127 12:52:26.301267  499955 main.go:141] libmachine: (multinode-504309) DBG | domain multinode-504309 has defined IP address 192.168.39.250 and MAC address 52:54:00:cb:77:e2 in network mk-multinode-504309
	I0127 12:52:26.301431  499955 main.go:141] libmachine: (multinode-504309) Calling .GetSSHPort
	I0127 12:52:26.301657  499955 main.go:141] libmachine: (multinode-504309) Calling .GetSSHKeyPath
	I0127 12:52:26.301791  499955 main.go:141] libmachine: (multinode-504309) Calling .GetSSHUsername
	I0127 12:52:26.301944  499955 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/multinode-504309/id_rsa Username:docker}
	I0127 12:52:26.386740  499955 ssh_runner.go:195] Run: systemctl --version
	I0127 12:52:26.393671  499955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:52:26.411255  499955 kubeconfig.go:125] found "multinode-504309" server: "https://192.168.39.250:8443"
	I0127 12:52:26.411319  499955 api_server.go:166] Checking apiserver status ...
	I0127 12:52:26.411371  499955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:52:26.436115  499955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup
	W0127 12:52:26.446460  499955 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:52:26.446528  499955 ssh_runner.go:195] Run: ls
	I0127 12:52:26.451868  499955 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0127 12:52:26.456787  499955 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I0127 12:52:26.456824  499955 status.go:463] multinode-504309 apiserver status = Running (err=<nil>)
	I0127 12:52:26.456848  499955 status.go:176] multinode-504309 status: &{Name:multinode-504309 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:52:26.456865  499955 status.go:174] checking status of multinode-504309-m02 ...
	I0127 12:52:26.457163  499955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:52:26.457203  499955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:52:26.472870  499955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0127 12:52:26.473331  499955 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:52:26.473829  499955 main.go:141] libmachine: Using API Version  1
	I0127 12:52:26.473877  499955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:52:26.474246  499955 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:52:26.474455  499955 main.go:141] libmachine: (multinode-504309-m02) Calling .GetState
	I0127 12:52:26.476205  499955 status.go:371] multinode-504309-m02 host status = "Running" (err=<nil>)
	I0127 12:52:26.476223  499955 host.go:66] Checking if "multinode-504309-m02" exists ...
	I0127 12:52:26.476506  499955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:52:26.476548  499955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:52:26.492561  499955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I0127 12:52:26.493108  499955 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:52:26.493590  499955 main.go:141] libmachine: Using API Version  1
	I0127 12:52:26.493625  499955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:52:26.494014  499955 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:52:26.494246  499955 main.go:141] libmachine: (multinode-504309-m02) Calling .GetIP
	I0127 12:52:26.497284  499955 main.go:141] libmachine: (multinode-504309-m02) DBG | domain multinode-504309-m02 has defined MAC address 52:54:00:42:a2:97 in network mk-multinode-504309
	I0127 12:52:26.497718  499955 main.go:141] libmachine: (multinode-504309-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:a2:97", ip: ""} in network mk-multinode-504309: {Iface:virbr1 ExpiryTime:2025-01-27 13:50:47 +0000 UTC Type:0 Mac:52:54:00:42:a2:97 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-504309-m02 Clientid:01:52:54:00:42:a2:97}
	I0127 12:52:26.497740  499955 main.go:141] libmachine: (multinode-504309-m02) DBG | domain multinode-504309-m02 has defined IP address 192.168.39.107 and MAC address 52:54:00:42:a2:97 in network mk-multinode-504309
	I0127 12:52:26.497909  499955 host.go:66] Checking if "multinode-504309-m02" exists ...
	I0127 12:52:26.498226  499955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:52:26.498275  499955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:52:26.514174  499955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
	I0127 12:52:26.514606  499955 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:52:26.515034  499955 main.go:141] libmachine: Using API Version  1
	I0127 12:52:26.515055  499955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:52:26.515359  499955 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:52:26.515541  499955 main.go:141] libmachine: (multinode-504309-m02) Calling .DriverName
	I0127 12:52:26.515737  499955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:52:26.515762  499955 main.go:141] libmachine: (multinode-504309-m02) Calling .GetSSHHostname
	I0127 12:52:26.518426  499955 main.go:141] libmachine: (multinode-504309-m02) DBG | domain multinode-504309-m02 has defined MAC address 52:54:00:42:a2:97 in network mk-multinode-504309
	I0127 12:52:26.518891  499955 main.go:141] libmachine: (multinode-504309-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:a2:97", ip: ""} in network mk-multinode-504309: {Iface:virbr1 ExpiryTime:2025-01-27 13:50:47 +0000 UTC Type:0 Mac:52:54:00:42:a2:97 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-504309-m02 Clientid:01:52:54:00:42:a2:97}
	I0127 12:52:26.518921  499955 main.go:141] libmachine: (multinode-504309-m02) DBG | domain multinode-504309-m02 has defined IP address 192.168.39.107 and MAC address 52:54:00:42:a2:97 in network mk-multinode-504309
	I0127 12:52:26.519011  499955 main.go:141] libmachine: (multinode-504309-m02) Calling .GetSSHPort
	I0127 12:52:26.519225  499955 main.go:141] libmachine: (multinode-504309-m02) Calling .GetSSHKeyPath
	I0127 12:52:26.519404  499955 main.go:141] libmachine: (multinode-504309-m02) Calling .GetSSHUsername
	I0127 12:52:26.519550  499955 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/multinode-504309-m02/id_rsa Username:docker}
	I0127 12:52:26.602527  499955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:52:26.617628  499955 status.go:176] multinode-504309-m02 status: &{Name:multinode-504309-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:52:26.617670  499955 status.go:174] checking status of multinode-504309-m03 ...
	I0127 12:52:26.618054  499955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:52:26.618110  499955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:52:26.633949  499955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34943
	I0127 12:52:26.634476  499955 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:52:26.635061  499955 main.go:141] libmachine: Using API Version  1
	I0127 12:52:26.635085  499955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:52:26.635439  499955 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:52:26.635665  499955 main.go:141] libmachine: (multinode-504309-m03) Calling .GetState
	I0127 12:52:26.637201  499955 status.go:371] multinode-504309-m03 host status = "Stopped" (err=<nil>)
	I0127 12:52:26.637216  499955 status.go:384] host is not running, skipping remaining checks
	I0127 12:52:26.637224  499955 status.go:176] multinode-504309-m03 status: &{Name:multinode-504309-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
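Note on the non-zero exits above: once m03 is stopped, `minikube status` still prints the per-node report but exits 7 rather than 0, and that exit code is what the test tolerates. A small sketch that surfaces the exit code (exit code 7 is simply what this report shows for a profile with a stopped host; this is a sketch, not the test's code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-504309", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// In the run above this was 7, with at least one host reported as Stopped.
		fmt.Println("status exit code:", ee.ExitCode())
	}
}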

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (36.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-504309 node start m03 -v=7 --alsologtostderr: (35.710555449s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.36s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (314.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-504309
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-504309
E0127 12:53:39.257684  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-504309: (3m3.053084445s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-504309 --wait=true -v=8 --alsologtostderr
E0127 12:56:24.098239  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-504309 --wait=true -v=8 --alsologtostderr: (2m11.050033293s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-504309
--- PASS: TestMultiNode/serial/RestartKeepsNodes (314.21s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-504309 node delete m03: (1.719503115s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.25s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (181.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 stop
E0127 12:58:39.257199  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:59:27.164208  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-504309 stop: (3m1.702871421s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-504309 status: exit status 7 (91.832238ms)

                                                
                                                
-- stdout --
	multinode-504309
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-504309-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-504309 status --alsologtostderr: exit status 7 (87.62461ms)

                                                
                                                
-- stdout --
	multinode-504309
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-504309-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:01:21.297743  502692 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:01:21.297855  502692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:01:21.297863  502692 out.go:358] Setting ErrFile to fd 2...
	I0127 13:01:21.297868  502692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:01:21.298086  502692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 13:01:21.298267  502692 out.go:352] Setting JSON to false
	I0127 13:01:21.298300  502692 mustload.go:65] Loading cluster: multinode-504309
	I0127 13:01:21.298411  502692 notify.go:220] Checking for updates...
	I0127 13:01:21.298760  502692 config.go:182] Loaded profile config "multinode-504309": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:01:21.298782  502692 status.go:174] checking status of multinode-504309 ...
	I0127 13:01:21.299346  502692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:01:21.299389  502692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:01:21.314996  502692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I0127 13:01:21.315576  502692 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:01:21.316364  502692 main.go:141] libmachine: Using API Version  1
	I0127 13:01:21.316399  502692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:01:21.316776  502692 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:01:21.316973  502692 main.go:141] libmachine: (multinode-504309) Calling .GetState
	I0127 13:01:21.318636  502692 status.go:371] multinode-504309 host status = "Stopped" (err=<nil>)
	I0127 13:01:21.318656  502692 status.go:384] host is not running, skipping remaining checks
	I0127 13:01:21.318663  502692 status.go:176] multinode-504309 status: &{Name:multinode-504309 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:01:21.318684  502692 status.go:174] checking status of multinode-504309-m02 ...
	I0127 13:01:21.318980  502692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:01:21.319027  502692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:01:21.334155  502692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I0127 13:01:21.334622  502692 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:01:21.335154  502692 main.go:141] libmachine: Using API Version  1
	I0127 13:01:21.335182  502692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:01:21.335528  502692 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:01:21.335754  502692 main.go:141] libmachine: (multinode-504309-m02) Calling .GetState
	I0127 13:01:21.337226  502692 status.go:371] multinode-504309-m02 host status = "Stopped" (err=<nil>)
	I0127 13:01:21.337239  502692 status.go:384] host is not running, skipping remaining checks
	I0127 13:01:21.337245  502692 status.go:176] multinode-504309-m02 status: &{Name:multinode-504309-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.88s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (93.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-504309 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 13:01:24.097396  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-504309 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m33.229884047s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-504309 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (93.78s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (47.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-504309
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-504309-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-504309-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (67.131437ms)

                                                
                                                
-- stdout --
	* [multinode-504309-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-504309-m02' is duplicated with machine name 'multinode-504309-m02' in profile 'multinode-504309'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-504309-m03 --driver=kvm2  --container-runtime=containerd
E0127 13:03:39.253587  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-504309-m03 --driver=kvm2  --container-runtime=containerd: (45.856440663s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-504309
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-504309: exit status 80 (224.79537ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-504309 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-504309-m03 already exists in multinode-504309-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-504309-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-504309-m03: (1.018909541s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.22s)

                                                
                                    
x
+
TestPreload (264.28s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-918195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-918195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m59.687041799s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-918195 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-918195 image pull gcr.io/k8s-minikube/busybox: (1.720456789s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-918195
E0127 13:06:24.100823  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-918195: (1m30.798112398s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-918195 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-918195 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (50.742627237s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-918195 image list
helpers_test.go:175: Cleaning up "test-preload-918195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-918195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-918195: (1.080275113s)
--- PASS: TestPreload (264.28s)
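
Note: the preload check above boils down to "start without preloads on an older Kubernetes, pull an extra image, stop, restart with the new binary, and confirm the image is still listed". A minimal manual sketch with the same flags (the profile name is illustrative):

$ minikube start -p test-preload --memory=2200 --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
$ minikube -p test-preload image pull gcr.io/k8s-minikube/busybox
$ minikube stop -p test-preload
$ minikube start -p test-preload --memory=2200 --driver=kvm2 --container-runtime=containerd
$ minikube -p test-preload image list    # busybox should still appear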

TestScheduledStopUnix (115s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-143488 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0127 13:08:22.333151  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:08:39.255483  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-143488 --memory=2048 --driver=kvm2  --container-runtime=containerd: (43.246951782s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-143488 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-143488 -n scheduled-stop-143488
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-143488 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 13:08:51.828227  474275 retry.go:31] will retry after 120.952µs: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.829393  474275 retry.go:31] will retry after 183.971µs: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.830565  474275 retry.go:31] will retry after 197.642µs: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.831704  474275 retry.go:31] will retry after 214.297µs: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.832869  474275 retry.go:31] will retry after 709.736µs: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.834031  474275 retry.go:31] will retry after 388.781µs: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.835191  474275 retry.go:31] will retry after 752.96µs: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.836330  474275 retry.go:31] will retry after 1.213121ms: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.838562  474275 retry.go:31] will retry after 1.324667ms: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.840769  474275 retry.go:31] will retry after 5.351234ms: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.847027  474275 retry.go:31] will retry after 6.002107ms: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.853271  474275 retry.go:31] will retry after 9.89626ms: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.863568  474275 retry.go:31] will retry after 10.52309ms: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.874861  474275 retry.go:31] will retry after 16.346156ms: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.892175  474275 retry.go:31] will retry after 15.588069ms: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
I0127 13:08:51.908460  474275 retry.go:31] will retry after 40.330894ms: open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/scheduled-stop-143488/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-143488 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-143488 -n scheduled-stop-143488
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-143488
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-143488 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-143488
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-143488: exit status 7 (76.52457ms)

-- stdout --
	scheduled-stop-143488
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-143488 -n scheduled-stop-143488
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-143488 -n scheduled-stop-143488: exit status 7 (72.632776ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-143488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-143488
--- PASS: TestScheduledStopUnix (115.00s)
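
Note: the scheduled-stop commands exercised above can be replayed directly; these are the same invocations shown in the log (profile name illustrative):

$ minikube stop -p scheduled-stop --schedule 5m          # arm a stop five minutes out
$ minikube status --format={{.TimeToStop}} -p scheduled-stop
$ minikube stop -p scheduled-stop --cancel-scheduled     # disarm it
$ minikube stop -p scheduled-stop --schedule 15s         # arm a short stop and let it fire
$ minikube status --format={{.Host}} -p scheduled-stop   # exits 7 and prints "Stopped" once the stop has run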

TestRunningBinaryUpgrade (195.63s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4070163338 start -p running-upgrade-036750 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0127 13:11:24.097471  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4070163338 start -p running-upgrade-036750 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m10.774747852s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-036750 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-036750 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m3.265509007s)
helpers_test.go:175: Cleaning up "running-upgrade-036750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-036750
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-036750: (1.199478063s)
--- PASS: TestRunningBinaryUpgrade (195.63s)

TestKubernetesUpgrade (208.3s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-090497 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-090497 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m26.505283392s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-090497
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-090497: (1.678051895s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-090497 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-090497 status --format={{.Host}}: exit status 7 (99.93627ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-090497 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-090497 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (49.297322257s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-090497 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-090497 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-090497 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (88.826226ms)

-- stdout --
	* [kubernetes-upgrade-090497] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-090497
	    minikube start -p kubernetes-upgrade-090497 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0904972 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-090497 --kubernetes-version=v1.32.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-090497 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-090497 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m9.330500735s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-090497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-090497
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-090497: (1.237838114s)
--- PASS: TestKubernetesUpgrade (208.30s)
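
Note: in outline the test boots v1.20.0, stops, upgrades in place to v1.32.1, confirms that a direct downgrade is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED), then restarts on v1.32.1. A minimal manual sketch with the same flags (profile name illustrative):

$ minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd
$ minikube stop -p k8s-upgrade
$ minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.32.1 --driver=kvm2 --container-runtime=containerd
$ minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd   # refused; delete and recreate instead, as the suggestion block above explains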

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911228 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-911228 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (88.271842ms)

-- stdout --
	* [NoKubernetes-911228] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
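
Note: as the MK_USAGE message says, --no-kubernetes and --kubernetes-version are mutually exclusive. A valid invocation drops the version flag (and clears any global default first):

$ minikube config unset kubernetes-version
$ minikube start -p NoKubernetes-911228 --no-kubernetes --driver=kvm2 --container-runtime=containerd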

TestNoKubernetes/serial/StartWithK8s (95.27s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911228 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-911228 --driver=kvm2  --container-runtime=containerd: (1m34.989409237s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-911228 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.27s)

TestNetworkPlugins/group/false (3.25s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-744060 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-744060 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (115.169196ms)

-- stdout --
	* [false-744060] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0127 13:11:32.573932  508187 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:11:32.574065  508187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:11:32.574075  508187 out.go:358] Setting ErrFile to fd 2...
	I0127 13:11:32.574082  508187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:11:32.574259  508187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
	I0127 13:11:32.574855  508187 out.go:352] Setting JSON to false
	I0127 13:11:32.575832  508187 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":35590,"bootTime":1737947903,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:11:32.575908  508187 start.go:139] virtualization: kvm guest
	I0127 13:11:32.578305  508187 out.go:177] * [false-744060] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:11:32.579926  508187 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:11:32.579931  508187 notify.go:220] Checking for updates...
	I0127 13:11:32.582435  508187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:11:32.583715  508187 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
	I0127 13:11:32.584995  508187 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
	I0127 13:11:32.587024  508187 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:11:32.588309  508187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:11:32.590208  508187 config.go:182] Loaded profile config "NoKubernetes-911228": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:11:32.590382  508187 config.go:182] Loaded profile config "kubernetes-upgrade-090497": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:11:32.590511  508187 config.go:182] Loaded profile config "running-upgrade-036750": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0127 13:11:32.590624  508187 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:11:32.626724  508187 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 13:11:32.628221  508187 start.go:297] selected driver: kvm2
	I0127 13:11:32.628239  508187 start.go:901] validating driver "kvm2" against <nil>
	I0127 13:11:32.628252  508187 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:11:32.630333  508187 out.go:201] 
	W0127 13:11:32.631529  508187 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0127 13:11:32.632887  508187 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-744060 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-744060

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-744060

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-744060

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-744060

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-744060

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-744060

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-744060

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-744060

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-744060

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-744060

>>> host: /etc/nsswitch.conf:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: /etc/hosts:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: /etc/resolv.conf:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-744060

>>> host: crictl pods:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: crictl containers:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> k8s: describe netcat deployment:
error: context "false-744060" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-744060" does not exist

>>> k8s: netcat logs:
error: context "false-744060" does not exist

>>> k8s: describe coredns deployment:
error: context "false-744060" does not exist

>>> k8s: describe coredns pods:
error: context "false-744060" does not exist

>>> k8s: coredns logs:
error: context "false-744060" does not exist

>>> k8s: describe api server pod(s):
error: context "false-744060" does not exist

>>> k8s: api server logs:
error: context "false-744060" does not exist

>>> host: /etc/cni:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: ip a s:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: ip r s:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: iptables-save:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: iptables table nat:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> k8s: describe kube-proxy daemon set:
error: context "false-744060" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-744060" does not exist

>>> k8s: kube-proxy logs:
error: context "false-744060" does not exist

>>> host: kubelet daemon status:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: kubelet daemon config:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> k8s: kubelet logs:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-744060

>>> host: docker daemon status:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: docker daemon config:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: /etc/docker/daemon.json:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: docker system info:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: cri-docker daemon status:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: cri-docker daemon config:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: cri-dockerd version:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: containerd daemon status:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: containerd daemon config:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: /etc/containerd/config.toml:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: containerd config dump:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: crio daemon status:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: crio daemon config:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: /etc/crio:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

>>> host: crio config:
* Profile "false-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-744060"

----------------------- debugLogs end: false-744060 [took: 2.979511157s] --------------------------------
helpers_test.go:175: Cleaning up "false-744060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-744060
--- PASS: TestNetworkPlugins/group/false (3.25s)
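
Note: this failure is the expected outcome: with --container-runtime=containerd, minikube rejects --cni=false (MK_USAGE: the containerd runtime requires CNI). The passing variants later in this run pass an explicit CNI instead, for example:

$ minikube start -p kindnet-744060 --memory=3072 --cni=kindnet --driver=kvm2 --container-runtime=containerd
$ minikube start -p calico-744060 --memory=3072 --cni=calico --driver=kvm2 --container-runtime=containerd
$ minikube start -p custom-flannel-744060 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=containerd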

TestNoKubernetes/serial/StartWithStopK8s (38.53s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911228 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-911228 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (37.153478905s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-911228 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-911228 status -o json: exit status 2 (262.577035ms)

-- stdout --
	{"Name":"NoKubernetes-911228","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-911228
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-911228: (1.111708878s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.53s)

TestNoKubernetes/serial/Start (47.78s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911228 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-911228 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (47.783083398s)
--- PASS: TestNoKubernetes/serial/Start (47.78s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-911228 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-911228 "sudo systemctl is-active --quiet service kubelet": exit status 1 (228.427596ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
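
Note: the check here is a plain systemd probe over SSH; a non-zero exit just means the kubelet unit is not active, which is the expected state for a --no-kubernetes profile:

$ minikube ssh -p NoKubernetes-911228 "sudo systemctl is-active --quiet service kubelet"
$ echo $?    # non-zero here means kubelet is not running, i.e. the test's pass condition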

TestNoKubernetes/serial/ProfileList (1.76s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.76s)

TestNoKubernetes/serial/Stop (1.54s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-911228
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-911228: (1.535663179s)
--- PASS: TestNoKubernetes/serial/Stop (1.54s)

TestNoKubernetes/serial/StartNoArgs (44.8s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911228 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-911228 --driver=kvm2  --container-runtime=containerd: (44.797763131s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.80s)

TestPause/serial/Start (113.03s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-831988 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-831988 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m53.033255817s)
--- PASS: TestPause/serial/Start (113.03s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-911228 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-911228 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.564906ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestStoppedBinaryUpgrade/Setup (0.37s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.37s)

TestStoppedBinaryUpgrade/Upgrade (179.86s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1739461546 start -p stopped-upgrade-973088 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1739461546 start -p stopped-upgrade-973088 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m11.949335705s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1739461546 -p stopped-upgrade-973088 stop
E0127 13:16:07.165676  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1739461546 -p stopped-upgrade-973088 stop: (2.187836447s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-973088 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0127 13:16:24.097119  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-973088 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (45.72512475s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (179.86s)
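
Note: the upgrade path under test is: provision with an old release binary (v1.26.0), stop the cluster with that same binary, then start it again with the freshly built binary. A minimal sketch using this run's paths (the old-binary path is just a temp file downloaded for the test):

$ /tmp/minikube-v1.26.0.1739461546 start -p stopped-upgrade-973088 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
$ /tmp/minikube-v1.26.0.1739461546 -p stopped-upgrade-973088 stop
$ out/minikube-linux-amd64 start -p stopped-upgrade-973088 --memory=2200 --driver=kvm2 --container-runtime=containerd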

TestNetworkPlugins/group/auto/Start (121.26s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m1.260031783s)
--- PASS: TestNetworkPlugins/group/auto/Start (121.26s)

TestPause/serial/SecondStartNoReconfiguration (93.41s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-831988 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-831988 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m33.387560294s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (93.41s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-973088
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

TestNetworkPlugins/group/kindnet/Start (64.04s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m4.04364226s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.04s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-744060 "pgrep -a kubelet"
I0127 13:17:08.613941  474275 config.go:182] Loaded profile config "auto-744060": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-744060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qvqgr" [e9ede19f-945a-4a6e-bc7b-bfa1994f969d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-qvqgr" [e9ede19f-945a-4a6e-bc7b-bfa1994f969d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004460689s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.26s)

TestPause/serial/Pause (0.77s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-831988 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-744060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-831988 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-831988 --output=json --layout=cluster: exit status 2 (281.124797ms)

-- stdout --
	{"Name":"pause-831988","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-831988","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestPause/serial/Unpause (0.66s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-831988 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.8s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-831988 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (1.09s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-831988 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-831988 --alsologtostderr -v=5: (1.089462293s)
--- PASS: TestPause/serial/DeletePaused (1.09s)

TestPause/serial/VerifyDeletedResources (0.69s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.69s)

TestNetworkPlugins/group/calico/Start (86.24s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m26.235621183s)
--- PASS: TestNetworkPlugins/group/calico/Start (86.24s)
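
The CNI Start tests differ only in the --cni flag passed to minikube. The start command below is verbatim from the calico run above (reflowed onto two lines); the follow-up get-pods call, using the k8s-app=calico-node label that the ControllerPod subtest later waits on, is an added spot check rather than part of the test:

    out/minikube-linux-amd64 start -p calico-744060 --memory=3072 --alsologtostderr --wait=true \
      --wait-timeout=15m --cni=calico --driver=kvm2 --container-runtime=containerd
    # Confirm the Calico node agent is up before the connectivity subtests run
    kubectl --context calico-744060 -n kube-system get pods -l k8s-app=calico-node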

TestNetworkPlugins/group/custom-flannel/Start (96.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m36.087366494s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (96.09s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rv2jt" [d6d7e895-8e22-4fdf-8550-6112b2040f21] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005027989s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-744060 "pgrep -a kubelet"
I0127 13:18:07.078700  474275 config.go:182] Loaded profile config "kindnet-744060": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-744060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wwd8f" [d9a1e22a-f242-4a26-a304-9a29732c41a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wwd8f" [d9a1e22a-f242-4a26-a304-9a29732c41a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.005883352s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.24s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-744060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)
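
The DNS, Localhost, and HairPin subtests above are three probes run inside the same netcat deployment; the exec commands below are copied verbatim from the log, with comments added to note what each probe exercises:

    # DNS: resolve the in-cluster service name kubernetes.default from inside the pod
    kubectl --context kindnet-744060 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod can reach its own port 8080 over loopback
    kubectl --context kindnet-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod can reach itself back through the netcat service name
    kubectl --context kindnet-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"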

TestNetworkPlugins/group/enable-default-cni/Start (88.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0127 13:18:39.253564  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m28.804418915s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.80s)

TestNetworkPlugins/group/flannel/Start (97.47s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m37.465009645s)
--- PASS: TestNetworkPlugins/group/flannel/Start (97.47s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9gjxl" [0d17e63a-7231-4809-aa98-4014e49937d5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005115486s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-744060 "pgrep -a kubelet"
I0127 13:18:54.292579  474275 config.go:182] Loaded profile config "calico-744060": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-744060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-24ggf" [ad172d22-cedc-4c6b-a3b9-aa33a1f32b8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-24ggf" [ad172d22-cedc-4c6b-a3b9-aa33a1f32b8c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00414717s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-744060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-744060 "pgrep -a kubelet"
I0127 13:19:12.103866  474275 config.go:182] Loaded profile config "custom-flannel-744060": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-744060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wbg2f" [802fbb62-05c2-4673-88c5-55fc13a4d698] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wbg2f" [802fbb62-05c2-4673-88c5-55fc13a4d698] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004733439s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-744060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (100.54s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-744060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m40.540949402s)
--- PASS: TestNetworkPlugins/group/bridge/Start (100.54s)

TestStartStop/group/old-k8s-version/serial/FirstStart (201.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-116657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-116657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m21.054850301s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (201.05s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-744060 "pgrep -a kubelet"
I0127 13:20:02.769877  474275 config.go:182] Loaded profile config "enable-default-cni-744060": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-744060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-slhw2" [55a86f5b-7e38-49c0-9f40-b3daa5c88913] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-slhw2" [55a86f5b-7e38-49c0-9f40-b3daa5c88913] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004094342s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-744060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5hl6g" [1eb242ba-0451-405f-b636-ad478b90582d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.008443392s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-744060 "pgrep -a kubelet"
I0127 13:20:27.639711  474275 config.go:182] Loaded profile config "flannel-744060": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (13.36s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-744060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9tfqp" [dcc1c3cc-3d59-4032-aa99-e9e098ea9d1b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9tfqp" [dcc1c3cc-3d59-4032-aa99-e9e098ea9d1b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.003938734s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.36s)

TestStartStop/group/no-preload/serial/FirstStart (76.78s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-325431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-325431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m16.782020313s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.78s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-744060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestStartStop/group/embed-certs/serial/FirstStart (88.84s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-766944 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-766944 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m28.837373346s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.84s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-744060 "pgrep -a kubelet"
I0127 13:21:04.051175  474275 config.go:182] Loaded profile config "bridge-744060": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-744060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-cfhff" [c7b83b39-9551-4ba2-a5d1-b5531915ba4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-cfhff" [c7b83b39-9551-4ba2-a5d1-b5531915ba4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003732863s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-744060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-744060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E0127 13:30:03.077204  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:30:21.306975  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:30:30.778890  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:30:46.221079  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:30:49.011736  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:31:04.314520  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:31:24.097201  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:31:32.017248  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:32:08.858925  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:32:47.167447  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:33:00.843049  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:33:02.358073  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:33:30.063450  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:33:39.253784  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:33:48.071668  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:34:12.367232  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:03.076882  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:21.307401  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:04.313683  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:24.096953  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:37:08.859064  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:38:00.842949  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:38:02.358028  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:38:31.923071  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:38:39.253296  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:38:48.071431  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:39:12.367094  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:39:23.908523  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:40:03.076945  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:40:11.136552  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:40:21.307295  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:40:35.432040  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:41:04.313623  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:41:24.097165  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:41:26.141203  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:41:42.336786  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:41:44.373708  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:42:08.859406  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:42:27.379625  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:43:00.842435  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:43:02.358025  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:43:39.253184  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:43:48.072010  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:44:12.367245  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:44:25.424881  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:45:03.077488  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:45:21.307379  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:46:04.313604  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:46:24.097807  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:47:08.859375  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:48:00.842238  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:48:02.358020  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:48:39.253468  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:48:48.072804  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:49:12.367189  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:49:27.169788  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:50:03.076963  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:50:21.307360  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-325510 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-325510 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (59.281319944s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.28s)

TestStartStop/group/no-preload/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-325431 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1d095dcc-28cc-477d-a58b-2ad64c627a8e] Pending
helpers_test.go:344: "busybox" [1d095dcc-28cc-477d-a58b-2ad64c627a8e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1d095dcc-28cc-477d-a58b-2ad64c627a8e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00346961s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-325431 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.29s)
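
DeployApp creates the busybox workload from testdata/busybox.yaml and then reads the container's open-file-descriptor limit. Reproduced by hand it looks roughly like the following; the kubectl wait call is a substitute for the test's own 8-minute poll on the integration-test=busybox label:

    kubectl --context no-preload-325431 create -f testdata/busybox.yaml
    # Wait for the busybox pod to report Ready (the test polls the same label)
    kubectl --context no-preload-325431 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    # Inspect the open-file limit inside the container, as the test does
    kubectl --context no-preload-325431 exec busybox -- /bin/sh -c "ulimit -n"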

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-325431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-325431 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)
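
EnableAddonWhileActive enables the metrics-server addon on the running cluster while overriding its image and registry (pointed here at the placeholder registry fake.domain), then dumps the resulting Deployment. Both commands are as logged; the trailing grep is an added convenience and assumes the image reference appears in the describe output:

    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-325431 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # The overridden image/registry should be visible on the deployment spec
    kubectl --context no-preload-325431 describe deploy/metrics-server -n kube-system | grep -i image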

TestStartStop/group/no-preload/serial/Stop (91.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-325431 --alsologtostderr -v=3
E0127 13:22:08.858707  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:22:08.865242  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:22:08.876789  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:22:08.898431  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:22:08.939988  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:22:09.021483  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:22:09.183039  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:22:09.505187  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:22:10.147447  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:22:11.429130  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:22:13.990558  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:22:19.112404  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-325431 --alsologtostderr -v=3: (1m31.021041673s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.02s)

TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-766944 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [22de50e1-1bea-4dac-8945-76347d11d8c9] Pending
helpers_test.go:344: "busybox" [22de50e1-1bea-4dac-8945-76347d11d8c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0127 13:22:29.354562  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [22de50e1-1bea-4dac-8945-76347d11d8c9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004351496s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-766944 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-325510 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [123b9056-3653-4f30-be97-1a95d3c246fc] Pending
helpers_test.go:344: "busybox" [123b9056-3653-4f30-be97-1a95d3c246fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [123b9056-3653-4f30-be97-1a95d3c246fc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004825134s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-325510 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-766944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-766944 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/embed-certs/serial/Stop (91.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-766944 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-766944 --alsologtostderr -v=3: (1m31.082859974s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.08s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-325510 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-325510 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-325510 --alsologtostderr -v=3
E0127 13:22:49.836410  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:00.842924  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:00.849320  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:00.860726  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:00.882141  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:00.923634  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:01.005164  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:01.167013  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:01.488817  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-325510 --alsologtostderr -v=3: (1m31.207237021s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-116657 create -f testdata/busybox.yaml
E0127 13:23:02.130669  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aaf868ff-ef4c-43b0-bb7f-f2d23a1e7ed1] Pending
helpers_test.go:344: "busybox" [aaf868ff-ef4c-43b0-bb7f-f2d23a1e7ed1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0127 13:23:03.412981  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [aaf868ff-ef4c-43b0-bb7f-f2d23a1e7ed1] Running
E0127 13:23:05.974676  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005193057s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-116657 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.45s)
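DeployApp creates the busybox pod from testdata/busybox.yaml, waits up to 8 minutes for pods labelled integration-test=busybox to become healthy, and finally execs `ulimit -n` inside the pod. A roughly equivalent manual sequence (a sketch; the harness's own wait logic may differ from kubectl wait):

    kubectl --context old-k8s-version-116657 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-116657 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-116657 exec busybox -- /bin/sh -c "ulimit -n"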

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-116657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0127 13:23:11.096927  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-116657 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (91.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-116657 --alsologtostderr -v=3
E0127 13:23:21.338562  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:30.798104  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-116657 --alsologtostderr -v=3: (1m31.171546454s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-325431 -n no-preload-325431
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-325431 -n no-preload-325431: exit status 7 (67.761107ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-325431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
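Here `status --format` renders a Go template over minikube's status struct, and a stopped host makes the command exit non-zero (status 7 above), which the test treats as acceptable. A script reproducing this step has to tolerate that exit code, e.g. (a sketch):

    HOST="$(out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-325431 || true)"
    if [ "$HOST" = "Stopped" ]; then
      out/minikube-linux-amd64 addons enable dashboard -p no-preload-325431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi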

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-766944 -n embed-certs-766944
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-766944 -n embed-certs-766944: exit status 7 (72.590875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-766944 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-325510 -n default-k8s-diff-port-325510
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-325510 -n default-k8s-diff-port-325510: exit status 7 (72.512231ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-325510 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-116657 -n old-k8s-version-116657
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-116657 -n old-k8s-version-116657: exit status 7 (88.88197ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-116657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (174.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-116657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0127 13:24:52.719436  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:53.344446  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:02.335340  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:03.077621  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:03.084066  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:03.095544  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:03.116801  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:03.158271  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:03.239859  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:03.401494  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:03.723345  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:04.364950  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:05.647132  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:08.209547  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:10.010970  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:13.330841  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:21.306622  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:21.313020  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:21.324477  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:21.345928  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:21.387450  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:21.468979  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:21.631305  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:21.953529  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:22.595675  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:23.572373  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:23.878063  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:26.440244  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:31.561712  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:34.306694  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:41.803387  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:44.054234  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:44.704517  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:02.285596  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:04.313682  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:04.320112  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:04.331589  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:04.353005  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:04.395146  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:04.476737  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:04.638736  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:04.960482  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:05.602576  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:06.884265  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:09.446187  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:14.568334  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:24.097674  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/functional-293873/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:24.810178  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:25.015747  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:31.932780  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:43.247350  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:45.292177  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:56.228824  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:08.859235  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:26.253577  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:36.561110  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/auto-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-116657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m54.032140753s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-116657 -n old-k8s-version-116657
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (174.32s)
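The interleaved cert_rotation.go:171 errors throughout this start appear to come from client-go attempting to reload client certificates for profiles (calico-744060, flannel-744060, bridge-744060, and others) that were already torn down earlier in the run; the test still passes. When reading a saved copy of this report they can be filtered out, e.g. (a sketch; the log filename is illustrative):

    grep -v 'cert_rotation.go:171' minikube-integration-20317.log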

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-944gr" [8ba5d6fa-bbbf-4a4a-9284-6fbc30be524c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005092545s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-944gr" [8ba5d6fa-bbbf-4a4a-9284-6fbc30be524c] Running
E0127 13:27:46.937453  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/enable-default-cni-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004312552s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-116657 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-116657 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-116657 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-116657 -n old-k8s-version-116657
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-116657 -n old-k8s-version-116657: exit status 2 (262.244751ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-116657 -n old-k8s-version-116657
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-116657 -n old-k8s-version-116657: exit status 2 (259.251669ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-116657 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-116657 -n old-k8s-version-116657
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-116657 -n old-k8s-version-116657
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.74s)
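Pause leaves the VM up but stops the kubelet and pauses the control-plane containers, which is why {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, each with exit status 2 (treated as acceptable). A condensed manual version of the same round trip (a sketch):

    out/minikube-linux-amd64 pause -p old-k8s-version-116657
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-116657   # Paused, exit 2
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p old-k8s-version-116657     # Stopped, exit 2
    out/minikube-linux-amd64 unpause -p old-k8s-version-116657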

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (53.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-296225 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 13:28:00.842980  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:02.359045  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:02.365483  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:02.376856  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:02.398356  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:02.439908  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:02.521422  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:02.682882  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:03.004491  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:03.646106  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:04.927481  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:05.169752  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:07.489421  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:12.611278  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:22.853424  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:28.546142  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:39.253001  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:43.335756  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-296225 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (53.479362515s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-296225 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0127 13:28:48.071129  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:48.175809  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/bridge-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-296225 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.481503577s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)
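The warning above is expected with --network-plugin=cni: the cluster comes up without a pod network, so regular workloads stay Pending until a CNI whose configuration matches the kubeadm.pod-network-cidr=10.42.0.0/16 setting is applied, roughly (a sketch; the manifest path is a placeholder for whichever CNI is chosen):

    kubectl --context newest-cni-296225 apply -f <your-cni-manifest>.yaml

This is also why the newest-cni DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop subtests are effectively no-ops in this report.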

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-296225 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-296225 --alsologtostderr -v=3: (7.388084705s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-296225 -n newest-cni-296225
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-296225 -n newest-cni-296225: exit status 7 (87.40788ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-296225 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (39.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-296225 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 13:29:12.367379  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/custom-flannel-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:15.774534  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:24.297919  474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/old-k8s-version-116657/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-296225 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (39.549307726s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-296225 -n newest-cni-296225
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.87s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-296225 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-296225 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-296225 -n newest-cni-296225
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-296225 -n newest-cni-296225: exit status 2 (266.990044ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-296225 -n newest-cni-296225
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-296225 -n newest-cni-296225: exit status 2 (304.236719ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-296225 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-296225 -n newest-cni-296225
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-296225 -n newest-cni-296225
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.97s)

                                                
                                    

Test skip (38/316)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.57
265 TestNetworkPlugins/group/cilium 3.59
280 TestStartStop/group/disable-driver-mounts 0.23
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.57s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-744060 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-744060

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-744060

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-744060

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-744060

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-744060

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-744060

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-744060

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-744060

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-744060

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-744060

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: /etc/hosts:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: /etc/resolv.conf:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-744060

>>> host: crictl pods:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: crictl containers:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> k8s: describe netcat deployment:
error: context "kubenet-744060" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-744060" does not exist

>>> k8s: netcat logs:
error: context "kubenet-744060" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-744060" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-744060" does not exist

>>> k8s: coredns logs:
error: context "kubenet-744060" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-744060" does not exist

>>> k8s: api server logs:
error: context "kubenet-744060" does not exist

>>> host: /etc/cni:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: ip a s:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: ip r s:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: iptables-save:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: iptables table nat:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-744060" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-744060" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-744060" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: kubelet daemon config:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> k8s: kubelet logs:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 13:11:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.60:8443
  name: kubernetes-upgrade-090497
contexts:
- context:
    cluster: kubernetes-upgrade-090497
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 13:11:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-090497
  name: kubernetes-upgrade-090497
current-context: kubernetes-upgrade-090497
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-090497
  user:
    client-certificate: /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kubernetes-upgrade-090497/client.crt
    client-key: /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kubernetes-upgrade-090497/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-744060

>>> host: docker daemon status:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: docker daemon config:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: docker system info:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: cri-docker daemon status:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: cri-docker daemon config:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: cri-dockerd version:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: containerd daemon status:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: containerd daemon config:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: containerd config dump:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: crio daemon status:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: crio daemon config:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: /etc/crio:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"

>>> host: crio config:
* Profile "kubenet-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-744060"
----------------------- debugLogs end: kubenet-744060 [took: 3.408527119s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-744060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-744060
--- SKIP: TestNetworkPlugins/group/kubenet (3.57s)

TestNetworkPlugins/group/cilium (3.59s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-744060 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-744060

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-744060

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-744060

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-744060

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-744060

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-744060

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-744060

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-744060

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-744060

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-744060

>>> host: /etc/nsswitch.conf:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: /etc/hosts:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: /etc/resolv.conf:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-744060

>>> host: crictl pods:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: crictl containers:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> k8s: describe netcat deployment:
error: context "cilium-744060" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-744060" does not exist

>>> k8s: netcat logs:
error: context "cilium-744060" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-744060" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-744060" does not exist

>>> k8s: coredns logs:
error: context "cilium-744060" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-744060" does not exist

>>> k8s: api server logs:
error: context "cilium-744060" does not exist

>>> host: /etc/cni:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: ip a s:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: ip r s:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: iptables-save:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: iptables table nat:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-744060

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-744060

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-744060" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-744060" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-744060

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-744060

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-744060" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-744060" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-744060" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-744060" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-744060" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: kubelet daemon config:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> k8s: kubelet logs:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 13:11:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.36:8443
  name: NoKubernetes-911228
contexts:
- context:
    cluster: NoKubernetes-911228
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 13:11:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-911228
  name: NoKubernetes-911228
current-context: NoKubernetes-911228
kind: Config
preferences: {}
users:
- name: NoKubernetes-911228
  user:
    client-certificate: /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/NoKubernetes-911228/client.crt
    client-key: /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/NoKubernetes-911228/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-744060

>>> host: docker daemon status:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: docker daemon config:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: docker system info:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: cri-docker daemon status:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: cri-docker daemon config:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: cri-dockerd version:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: containerd daemon status:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: containerd daemon config:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: containerd config dump:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: crio daemon status:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: crio daemon config:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: /etc/crio:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"

>>> host: crio config:
* Profile "cilium-744060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-744060"
----------------------- debugLogs end: cilium-744060 [took: 3.436986458s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-744060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-744060
--- SKIP: TestNetworkPlugins/group/cilium (3.59s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-904288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-904288
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)