Test Report: KVM_Linux_containerd 20319

648f194b476483b13df21998417ef6977c25d9d6:2025-01-27:38091

Test fail (2/272)

Order  Failed test                                                    Duration (s)
315    TestStartStop/group/no-preload/serial/SecondStart              1591.89
319    TestStartStop/group/default-k8s-diff-port/serial/SecondStart   7200.057
TestStartStop/group/no-preload/serial/SecondStart (1591.89s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-976043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-976043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m29.889825874s)

-- stdout --
	* [no-preload-976043] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-976043" primary control-plane node in "no-preload-976043" cluster
	* Restarting existing kvm2 VM for "no-preload-976043" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-976043 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

-- /stdout --
** stderr ** 
	I0127 11:42:03.096599  397538 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:42:03.096697  397538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:42:03.096709  397538 out.go:358] Setting ErrFile to fd 2...
	I0127 11:42:03.096716  397538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:42:03.096879  397538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
	I0127 11:42:03.097419  397538 out.go:352] Setting JSON to false
	I0127 11:42:03.098366  397538 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8668,"bootTime":1737969455,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:42:03.098467  397538 start.go:139] virtualization: kvm guest
	I0127 11:42:03.100127  397538 out.go:177] * [no-preload-976043] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:42:03.101226  397538 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:42:03.101319  397538 notify.go:220] Checking for updates...
	I0127 11:42:03.103248  397538 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:42:03.104291  397538 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 11:42:03.105193  397538 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	I0127 11:42:03.106107  397538 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:42:03.107049  397538 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:42:03.108359  397538 config.go:182] Loaded profile config "no-preload-976043": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:42:03.108703  397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:42:03.108755  397538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:03.124139  397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41393
	I0127 11:42:03.124588  397538 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:03.125155  397538 main.go:141] libmachine: Using API Version  1
	I0127 11:42:03.125177  397538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:03.125481  397538 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:03.125688  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:42:03.125890  397538 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:42:03.126145  397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:42:03.126181  397538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:03.140430  397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
	I0127 11:42:03.140751  397538 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:03.141193  397538 main.go:141] libmachine: Using API Version  1
	I0127 11:42:03.141215  397538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:03.141543  397538 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:03.141731  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:42:03.174305  397538 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:42:03.175428  397538 start.go:297] selected driver: kvm2
	I0127 11:42:03.175443  397538 start.go:901] validating driver "kvm2" against &{Name:no-preload-976043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-976043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:42:03.175564  397538 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:42:03.176243  397538 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:03.176336  397538 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-348858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:42:03.190164  397538 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:42:03.190564  397538 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:42:03.190600  397538 cni.go:84] Creating CNI manager for ""
	I0127 11:42:03.190655  397538 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 11:42:03.190698  397538 start.go:340] cluster config:
	{Name:no-preload-976043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-976043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:42:03.190821  397538 iso.go:125] acquiring lock: {Name:mk6cdd2a3d0bfb3682c1f0c806368944f23c4809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:03.192311  397538 out.go:177] * Starting "no-preload-976043" primary control-plane node in "no-preload-976043" cluster
	I0127 11:42:03.193390  397538 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:42:03.193514  397538 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/config.json ...
	I0127 11:42:03.193660  397538 cache.go:107] acquiring lock: {Name:mkb3b538314fd62eab2309dcd5112da57bc5e70f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:03.193683  397538 cache.go:107] acquiring lock: {Name:mk15fb5de5283e9b279b6db3ee8dc9560c2058d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:03.193688  397538 cache.go:107] acquiring lock: {Name:mkb29ea1858769de0fd0373c310163fc2fa627dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:03.193754  397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 11:42:03.193764  397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 11:42:03.193775  397538 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 94.986µs
	I0127 11:42:03.193768  397538 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 116.148µs
	I0127 11:42:03.193777  397538 start.go:360] acquireMachinesLock for no-preload-976043: {Name:mk69dba1a41baeb0794a28159a5cef220370e224 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:42:03.193792  397538 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 11:42:03.193767  397538 cache.go:107] acquiring lock: {Name:mk5cca8e3a1343f5fa2a41e9d49b890938823fec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:03.193814  397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 11:42:03.193810  397538 cache.go:107] acquiring lock: {Name:mk40f0ef462377ecb38e4605d0b4126cd486f9ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:03.193821  397538 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 151.448µs
	I0127 11:42:03.193830  397538 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 11:42:03.193806  397538 cache.go:107] acquiring lock: {Name:mkcde954d35adaaae82458dd5942fd51fc6d4bb7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:03.193895  397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 11:42:03.193795  397538 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 11:42:03.193775  397538 cache.go:107] acquiring lock: {Name:mkd4e82fceee3273a1d5d1b137294af730b923cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:03.193915  397538 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 144.555µs
	I0127 11:42:03.193932  397538 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 11:42:03.193938  397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 11:42:03.193905  397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 11:42:03.193948  397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 11:42:03.193948  397538 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 140.139µs
	I0127 11:42:03.193954  397538 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 226.702µs
	I0127 11:42:03.193959  397538 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 11:42:03.193962  397538 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 11:42:03.193960  397538 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 187.88µs
	I0127 11:42:03.193970  397538 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 11:42:03.193964  397538 cache.go:107] acquiring lock: {Name:mk24732c35ec239b8e7de95e39891c358710fa1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:03.194057  397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 11:42:03.194071  397538 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 156.361µs
	I0127 11:42:03.194080  397538 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 11:42:03.194086  397538 cache.go:87] Successfully saved all images to host disk.
	I0127 11:42:13.374117  397538 start.go:364] duration metric: took 10.180279664s to acquireMachinesLock for "no-preload-976043"
	I0127 11:42:13.374219  397538 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:42:13.374233  397538 fix.go:54] fixHost starting: 
	I0127 11:42:13.374751  397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:42:13.374820  397538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:13.391642  397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37031
	I0127 11:42:13.392129  397538 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:13.392697  397538 main.go:141] libmachine: Using API Version  1
	I0127 11:42:13.392719  397538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:13.393131  397538 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:13.393340  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:42:13.393471  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetState
	I0127 11:42:13.395004  397538 fix.go:112] recreateIfNeeded on no-preload-976043: state=Stopped err=<nil>
	I0127 11:42:13.395037  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	W0127 11:42:13.395190  397538 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:42:13.397334  397538 out.go:177] * Restarting existing kvm2 VM for "no-preload-976043" ...
	I0127 11:42:13.398351  397538 main.go:141] libmachine: (no-preload-976043) Calling .Start
	I0127 11:42:13.398480  397538 main.go:141] libmachine: (no-preload-976043) starting domain...
	I0127 11:42:13.398507  397538 main.go:141] libmachine: (no-preload-976043) ensuring networks are active...
	I0127 11:42:13.399264  397538 main.go:141] libmachine: (no-preload-976043) Ensuring network default is active
	I0127 11:42:13.399609  397538 main.go:141] libmachine: (no-preload-976043) Ensuring network mk-no-preload-976043 is active
	I0127 11:42:13.399975  397538 main.go:141] libmachine: (no-preload-976043) getting domain XML...
	I0127 11:42:13.400687  397538 main.go:141] libmachine: (no-preload-976043) creating domain...
	I0127 11:42:13.745414  397538 main.go:141] libmachine: (no-preload-976043) waiting for IP...
	I0127 11:42:13.746381  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:13.746824  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:13.746909  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:13.746812  397639 retry.go:31] will retry after 204.398172ms: waiting for domain to come up
	I0127 11:42:13.953424  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:13.954027  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:13.954091  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:13.953996  397639 retry.go:31] will retry after 235.784526ms: waiting for domain to come up
	I0127 11:42:14.191602  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:14.192187  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:14.192227  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:14.192155  397639 retry.go:31] will retry after 427.633149ms: waiting for domain to come up
	I0127 11:42:14.621752  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:14.622243  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:14.622296  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:14.622209  397639 retry.go:31] will retry after 570.191522ms: waiting for domain to come up
	I0127 11:42:15.193966  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:15.194462  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:15.194494  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:15.194431  397639 retry.go:31] will retry after 543.673911ms: waiting for domain to come up
	I0127 11:42:15.739921  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:15.740528  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:15.740569  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:15.740451  397639 retry.go:31] will retry after 783.899267ms: waiting for domain to come up
	I0127 11:42:16.526619  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:16.527159  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:16.527192  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:16.527133  397639 retry.go:31] will retry after 965.500175ms: waiting for domain to come up
	I0127 11:42:17.494011  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:17.494568  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:17.494600  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:17.494542  397639 retry.go:31] will retry after 958.680685ms: waiting for domain to come up
	I0127 11:42:18.454599  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:18.455062  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:18.455095  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:18.455018  397639 retry.go:31] will retry after 1.186565059s: waiting for domain to come up
	I0127 11:42:19.643447  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:19.644022  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:19.644056  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:19.643978  397639 retry.go:31] will retry after 2.293858726s: waiting for domain to come up
	I0127 11:42:21.940384  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:21.940868  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:21.940893  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:21.940830  397639 retry.go:31] will retry after 2.796298468s: waiting for domain to come up
	I0127 11:42:24.738798  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:24.739380  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:24.739407  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:24.739332  397639 retry.go:31] will retry after 2.553260317s: waiting for domain to come up
	I0127 11:42:27.295899  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:27.296395  397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
	I0127 11:42:27.296425  397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:27.296366  397639 retry.go:31] will retry after 3.879381748s: waiting for domain to come up
	I0127 11:42:31.179806  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.180316  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has current primary IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.180346  397538 main.go:141] libmachine: (no-preload-976043) found domain IP: 192.168.72.171
	I0127 11:42:31.180359  397538 main.go:141] libmachine: (no-preload-976043) reserving static IP address...
	I0127 11:42:31.180964  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "no-preload-976043", mac: "52:54:00:f9:a3:49", ip: "192.168.72.171"} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:31.180995  397538 main.go:141] libmachine: (no-preload-976043) DBG | skip adding static IP to network mk-no-preload-976043 - found existing host DHCP lease matching {name: "no-preload-976043", mac: "52:54:00:f9:a3:49", ip: "192.168.72.171"}
	I0127 11:42:31.181015  397538 main.go:141] libmachine: (no-preload-976043) DBG | Getting to WaitForSSH function...
	I0127 11:42:31.181029  397538 main.go:141] libmachine: (no-preload-976043) reserved static IP address 192.168.72.171 for domain no-preload-976043
	I0127 11:42:31.181041  397538 main.go:141] libmachine: (no-preload-976043) waiting for SSH...
	I0127 11:42:31.183228  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.183587  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:31.183622  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.183715  397538 main.go:141] libmachine: (no-preload-976043) DBG | Using SSH client type: external
	I0127 11:42:31.183751  397538 main.go:141] libmachine: (no-preload-976043) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa (-rw-------)
	I0127 11:42:31.183792  397538 main.go:141] libmachine: (no-preload-976043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:42:31.183814  397538 main.go:141] libmachine: (no-preload-976043) DBG | About to run SSH command:
	I0127 11:42:31.183827  397538 main.go:141] libmachine: (no-preload-976043) DBG | exit 0
	I0127 11:42:31.318714  397538 main.go:141] libmachine: (no-preload-976043) DBG | SSH cmd err, output: <nil>: 
	I0127 11:42:31.319136  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetConfigRaw
	I0127 11:42:31.319870  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetIP
	I0127 11:42:31.322940  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.323480  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:31.323521  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.323843  397538 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/config.json ...
	I0127 11:42:31.324072  397538 machine.go:93] provisionDockerMachine start ...
	I0127 11:42:31.324100  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:42:31.324326  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:42:31.326993  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.327388  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:31.327430  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.327562  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
	I0127 11:42:31.327756  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:42:31.327911  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:42:31.328079  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
	I0127 11:42:31.328250  397538 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:31.328499  397538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.171 22 <nil> <nil>}
	I0127 11:42:31.328514  397538 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:42:31.441955  397538 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:42:31.441997  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetMachineName
	I0127 11:42:31.442241  397538 buildroot.go:166] provisioning hostname "no-preload-976043"
	I0127 11:42:31.442273  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetMachineName
	I0127 11:42:31.442470  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:42:31.445399  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.445826  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:31.445876  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.446028  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
	I0127 11:42:31.446221  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:42:31.446409  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:42:31.446567  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
	I0127 11:42:31.446771  397538 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:31.447006  397538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.171 22 <nil> <nil>}
	I0127 11:42:31.447035  397538 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-976043 && echo "no-preload-976043" | sudo tee /etc/hostname
	I0127 11:42:31.580453  397538 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-976043
	
	I0127 11:42:31.580482  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:42:31.583587  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.584015  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:31.584048  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.584245  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
	I0127 11:42:31.584455  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:42:31.584634  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:42:31.584792  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
	I0127 11:42:31.584993  397538 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:31.585198  397538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.171 22 <nil> <nil>}
	I0127 11:42:31.585214  397538 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-976043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-976043/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-976043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:42:31.716612  397538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:42:31.716642  397538 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-348858/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-348858/.minikube}
	I0127 11:42:31.716670  397538 buildroot.go:174] setting up certificates
	I0127 11:42:31.716692  397538 provision.go:84] configureAuth start
	I0127 11:42:31.716705  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetMachineName
	I0127 11:42:31.716947  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetIP
	I0127 11:42:31.719415  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.719764  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:31.719792  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.719947  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:42:31.722391  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.722749  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:31.722785  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.722889  397538 provision.go:143] copyHostCerts
	I0127 11:42:31.722959  397538 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem, removing ...
	I0127 11:42:31.722983  397538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem
	I0127 11:42:31.723052  397538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem (1082 bytes)
	I0127 11:42:31.723254  397538 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem, removing ...
	I0127 11:42:31.723269  397538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem
	I0127 11:42:31.723310  397538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem (1123 bytes)
	I0127 11:42:31.723433  397538 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem, removing ...
	I0127 11:42:31.723445  397538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem
	I0127 11:42:31.723472  397538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem (1679 bytes)
	I0127 11:42:31.723554  397538 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem org=jenkins.no-preload-976043 san=[127.0.0.1 192.168.72.171 localhost minikube no-preload-976043]
	I0127 11:42:31.833389  397538 provision.go:177] copyRemoteCerts
	I0127 11:42:31.833431  397538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:42:31.833447  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:42:31.835718  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.836017  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:31.836052  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:31.836187  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
	I0127 11:42:31.836346  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:42:31.836440  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
	I0127 11:42:31.836572  397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
	I0127 11:42:31.921804  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 11:42:31.951153  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 11:42:31.975304  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:42:32.001708  397538 provision.go:87] duration metric: took 285.00385ms to configureAuth
	I0127 11:42:32.001735  397538 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:42:32.001975  397538 config.go:182] Loaded profile config "no-preload-976043": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:42:32.001991  397538 machine.go:96] duration metric: took 677.901023ms to provisionDockerMachine
	I0127 11:42:32.002002  397538 start.go:293] postStartSetup for "no-preload-976043" (driver="kvm2")
	I0127 11:42:32.002016  397538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:42:32.002050  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:42:32.002346  397538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:42:32.002381  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:42:32.004762  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:32.005177  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:32.005204  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:32.005363  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
	I0127 11:42:32.005527  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:42:32.005695  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
	I0127 11:42:32.005837  397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
	I0127 11:42:32.091456  397538 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:42:32.095413  397538 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:42:32.095437  397538 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-348858/.minikube/addons for local assets ...
	I0127 11:42:32.095495  397538 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-348858/.minikube/files for local assets ...
	I0127 11:42:32.095611  397538 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem -> 3562042.pem in /etc/ssl/certs
	I0127 11:42:32.095716  397538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:42:32.104408  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem --> /etc/ssl/certs/3562042.pem (1708 bytes)
	I0127 11:42:32.132033  397538 start.go:296] duration metric: took 130.01876ms for postStartSetup
	I0127 11:42:32.132073  397538 fix.go:56] duration metric: took 18.757840228s for fixHost
	I0127 11:42:32.132095  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:42:32.134785  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:32.135163  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:32.135207  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:32.135362  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
	I0127 11:42:32.135547  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:42:32.135716  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:42:32.135842  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
	I0127 11:42:32.136011  397538 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:32.136169  397538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.171 22 <nil> <nil>}
	I0127 11:42:32.136179  397538 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:42:32.254483  397538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978152.223308330
	
	I0127 11:42:32.254513  397538 fix.go:216] guest clock: 1737978152.223308330
	I0127 11:42:32.254523  397538 fix.go:229] Guest: 2025-01-27 11:42:32.22330833 +0000 UTC Remote: 2025-01-27 11:42:32.132078506 +0000 UTC m=+29.072718026 (delta=91.229824ms)
	I0127 11:42:32.254550  397538 fix.go:200] guest clock delta is within tolerance: 91.229824ms
	I0127 11:42:32.254569  397538 start.go:83] releasing machines lock for "no-preload-976043", held for 18.8803625s
	I0127 11:42:32.254605  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:42:32.254908  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetIP
	I0127 11:42:32.257822  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:32.258236  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:32.258285  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:32.258394  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:42:32.258871  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:42:32.259051  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:42:32.259220  397538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:42:32.259249  397538 ssh_runner.go:195] Run: cat /version.json
	I0127 11:42:32.259275  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:42:32.259288  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:42:32.262161  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:32.262395  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:32.262559  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:32.262581  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:32.262752  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:32.262779  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:32.262815  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
	I0127 11:42:32.262996  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
	I0127 11:42:32.263004  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:42:32.263132  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:42:32.263184  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
	I0127 11:42:32.263268  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
	I0127 11:42:32.263389  397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
	I0127 11:42:32.263411  397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
	I0127 11:42:32.352620  397538 ssh_runner.go:195] Run: systemctl --version
	I0127 11:42:32.377317  397538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:42:32.385395  397538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:42:32.385502  397538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:42:32.407057  397538 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:42:32.407117  397538 start.go:495] detecting cgroup driver to use...
	I0127 11:42:32.407191  397538 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 11:42:32.446250  397538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:42:32.463378  397538 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:42:32.463426  397538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:42:32.483338  397538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:42:32.500144  397538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:42:32.627382  397538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:42:32.776344  397538 docker.go:233] disabling docker service ...
	I0127 11:42:32.776436  397538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:42:32.794188  397538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:42:32.805919  397538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:42:32.949317  397538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:42:33.103404  397538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:42:33.117381  397538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:42:33.136381  397538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 11:42:33.148097  397538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 11:42:33.158937  397538 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 11:42:33.159019  397538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 11:42:33.170771  397538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:42:33.182634  397538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 11:42:33.193218  397538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:42:33.204370  397538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:42:33.216100  397538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 11:42:33.227506  397538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 11:42:33.241630  397538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 11:42:33.256006  397538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:42:33.266448  397538 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:42:33.266499  397538 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:42:33.281767  397538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:42:33.294330  397538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:42:33.435848  397538 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 11:42:33.472738  397538 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 11:42:33.472814  397538 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 11:42:33.480275  397538 retry.go:31] will retry after 867.114584ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 11:42:34.347647  397538 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 11:42:34.354396  397538 start.go:563] Will wait 60s for crictl version
	I0127 11:42:34.354470  397538 ssh_runner.go:195] Run: which crictl
	I0127 11:42:34.359304  397538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:42:34.409380  397538 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 11:42:34.409467  397538 ssh_runner.go:195] Run: containerd --version
	I0127 11:42:34.446918  397538 ssh_runner.go:195] Run: containerd --version
	I0127 11:42:34.479052  397538 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 11:42:34.480411  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetIP
	I0127 11:42:34.483298  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:34.483754  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:42:34.483792  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:42:34.484023  397538 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 11:42:34.489211  397538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:42:34.507169  397538 kubeadm.go:883] updating cluster {Name:no-preload-976043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-976043 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:42:34.507326  397538 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:42:34.507375  397538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:42:34.545992  397538 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 11:42:34.546022  397538 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:42:34.546033  397538 kubeadm.go:934] updating node { 192.168.72.171 8443 v1.32.1 containerd true true} ...
	I0127 11:42:34.546165  397538 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-976043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-976043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:42:34.546245  397538 ssh_runner.go:195] Run: sudo crictl info
	I0127 11:42:34.584023  397538 cni.go:84] Creating CNI manager for ""
	I0127 11:42:34.584050  397538 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 11:42:34.584063  397538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:42:34.584095  397538 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.171 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-976043 NodeName:no-preload-976043 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:42:34.584295  397538 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-976043"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.171"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.171"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:42:34.584375  397538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:42:34.596555  397538 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:42:34.596623  397538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:42:34.609790  397538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0127 11:42:34.630604  397538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:42:34.647666  397538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2313 bytes)
	I0127 11:42:34.667837  397538 ssh_runner.go:195] Run: grep 192.168.72.171	control-plane.minikube.internal$ /etc/hosts
	I0127 11:42:34.671757  397538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:42:34.688584  397538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:42:34.820236  397538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:42:34.843186  397538 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043 for IP: 192.168.72.171
	I0127 11:42:34.843216  397538 certs.go:194] generating shared ca certs ...
	I0127 11:42:34.843239  397538 certs.go:226] acquiring lock for ca certs: {Name:mkd458666dacb6826c0d92f860c3c2133957f34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:42:34.843444  397538 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-348858/.minikube/ca.key
	I0127 11:42:34.843494  397538 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.key
	I0127 11:42:34.843503  397538 certs.go:256] generating profile certs ...
	I0127 11:42:34.843580  397538 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.key
	I0127 11:42:34.843655  397538 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/apiserver.key.6127f777
	I0127 11:42:34.843711  397538 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/proxy-client.key
	I0127 11:42:34.843854  397538 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204.pem (1338 bytes)
	W0127 11:42:34.843887  397538 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204_empty.pem, impossibly tiny 0 bytes
	I0127 11:42:34.843909  397538 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:42:34.843952  397538 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem (1082 bytes)
	I0127 11:42:34.843978  397538 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:42:34.843999  397538 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem (1679 bytes)
	I0127 11:42:34.844039  397538 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem (1708 bytes)
	I0127 11:42:34.844726  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:42:34.897545  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:42:34.930839  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:42:34.965272  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 11:42:34.993738  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 11:42:35.022783  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:42:35.049813  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:42:35.082422  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:42:35.111230  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:42:35.140492  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204.pem --> /usr/share/ca-certificates/356204.pem (1338 bytes)
	I0127 11:42:35.169716  397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem --> /usr/share/ca-certificates/3562042.pem (1708 bytes)
	I0127 11:42:35.193880  397538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:42:35.213782  397538 ssh_runner.go:195] Run: openssl version
	I0127 11:42:35.220718  397538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:42:35.232357  397538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:42:35.238360  397538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:42:35.238422  397538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:42:35.246146  397538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:42:35.260062  397538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356204.pem && ln -fs /usr/share/ca-certificates/356204.pem /etc/ssl/certs/356204.pem"
	I0127 11:42:35.271431  397538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356204.pem
	I0127 11:42:35.275997  397538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/356204.pem
	I0127 11:42:35.276061  397538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356204.pem
	I0127 11:42:35.282125  397538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356204.pem /etc/ssl/certs/51391683.0"
	I0127 11:42:35.295982  397538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3562042.pem && ln -fs /usr/share/ca-certificates/3562042.pem /etc/ssl/certs/3562042.pem"
	I0127 11:42:35.309951  397538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3562042.pem
	I0127 11:42:35.314700  397538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/3562042.pem
	I0127 11:42:35.314777  397538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3562042.pem
	I0127 11:42:35.320540  397538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3562042.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:42:35.334666  397538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:42:35.340491  397538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:42:35.346356  397538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:42:35.353945  397538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:42:35.361660  397538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:42:35.368995  397538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:42:35.376407  397538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 11:42:35.383741  397538 kubeadm.go:392] StartCluster: {Name:no-preload-976043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-976043 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:42:35.383847  397538 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 11:42:35.383915  397538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:42:35.436368  397538 cri.go:89] found id: "592526ef804938a6c1c336b289cd8827d738d6311e20cc1c6faea6e7a38ddafb"
	I0127 11:42:35.436386  397538 cri.go:89] found id: "092bc5ec03dc81a509614dd4608faacb928e67005952659d9331f62a97f079d9"
	I0127 11:42:35.436391  397538 cri.go:89] found id: "f845360ca5e3d739fde48598fe03a808590cbf150c4bf3148b318621f8d63d81"
	I0127 11:42:35.436396  397538 cri.go:89] found id: "cedb41d0de988e1cddd2b9e34ef09066434b9415da107f3ec047f2981ee476ca"
	I0127 11:42:35.436399  397538 cri.go:89] found id: "2aa7389ef61cc9d25cd698ba69252c55f65a55700ce26de817ee1de43120108c"
	I0127 11:42:35.436404  397538 cri.go:89] found id: "3fd3c19397b2e924ba0e4556f2c9377eccdc58314ca8d2bdcf32db10b478ae41"
	I0127 11:42:35.436409  397538 cri.go:89] found id: "4c1ad43ef803c9766b730638f334f3a0c9a8d763435da1e2ffb842c2761df8ec"
	I0127 11:42:35.436413  397538 cri.go:89] found id: "7fe1f69096846344beae6da8d2abc2e0ced625ec110150d7398131c8ba421daa"
	I0127 11:42:35.436418  397538 cri.go:89] found id: ""
	I0127 11:42:35.436461  397538 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 11:42:35.454110  397538 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T11:42:35Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 11:42:35.454187  397538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:42:35.464979  397538 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 11:42:35.464999  397538 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 11:42:35.465040  397538 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 11:42:35.477379  397538 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:42:35.478050  397538 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-976043" does not appear in /home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 11:42:35.478400  397538 kubeconfig.go:62] /home/jenkins/minikube-integration/20319-348858/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-976043" cluster setting kubeconfig missing "no-preload-976043" context setting]
	I0127 11:42:35.478926  397538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/kubeconfig: {Name:mk12891275228a2835a35659c2ede45028f0a576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:42:35.480345  397538 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 11:42:35.491256  397538 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.171
	I0127 11:42:35.491286  397538 kubeadm.go:1160] stopping kube-system containers ...
	I0127 11:42:35.491301  397538 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 11:42:35.491346  397538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:42:35.533388  397538 cri.go:89] found id: "592526ef804938a6c1c336b289cd8827d738d6311e20cc1c6faea6e7a38ddafb"
	I0127 11:42:35.533416  397538 cri.go:89] found id: "092bc5ec03dc81a509614dd4608faacb928e67005952659d9331f62a97f079d9"
	I0127 11:42:35.533422  397538 cri.go:89] found id: "f845360ca5e3d739fde48598fe03a808590cbf150c4bf3148b318621f8d63d81"
	I0127 11:42:35.533428  397538 cri.go:89] found id: "cedb41d0de988e1cddd2b9e34ef09066434b9415da107f3ec047f2981ee476ca"
	I0127 11:42:35.533432  397538 cri.go:89] found id: "2aa7389ef61cc9d25cd698ba69252c55f65a55700ce26de817ee1de43120108c"
	I0127 11:42:35.533438  397538 cri.go:89] found id: "3fd3c19397b2e924ba0e4556f2c9377eccdc58314ca8d2bdcf32db10b478ae41"
	I0127 11:42:35.533453  397538 cri.go:89] found id: "4c1ad43ef803c9766b730638f334f3a0c9a8d763435da1e2ffb842c2761df8ec"
	I0127 11:42:35.533458  397538 cri.go:89] found id: "7fe1f69096846344beae6da8d2abc2e0ced625ec110150d7398131c8ba421daa"
	I0127 11:42:35.533462  397538 cri.go:89] found id: ""
	I0127 11:42:35.533469  397538 cri.go:252] Stopping containers: [592526ef804938a6c1c336b289cd8827d738d6311e20cc1c6faea6e7a38ddafb 092bc5ec03dc81a509614dd4608faacb928e67005952659d9331f62a97f079d9 f845360ca5e3d739fde48598fe03a808590cbf150c4bf3148b318621f8d63d81 cedb41d0de988e1cddd2b9e34ef09066434b9415da107f3ec047f2981ee476ca 2aa7389ef61cc9d25cd698ba69252c55f65a55700ce26de817ee1de43120108c 3fd3c19397b2e924ba0e4556f2c9377eccdc58314ca8d2bdcf32db10b478ae41 4c1ad43ef803c9766b730638f334f3a0c9a8d763435da1e2ffb842c2761df8ec 7fe1f69096846344beae6da8d2abc2e0ced625ec110150d7398131c8ba421daa]
	I0127 11:42:35.533525  397538 ssh_runner.go:195] Run: which crictl
	I0127 11:42:35.537866  397538 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 592526ef804938a6c1c336b289cd8827d738d6311e20cc1c6faea6e7a38ddafb 092bc5ec03dc81a509614dd4608faacb928e67005952659d9331f62a97f079d9 f845360ca5e3d739fde48598fe03a808590cbf150c4bf3148b318621f8d63d81 cedb41d0de988e1cddd2b9e34ef09066434b9415da107f3ec047f2981ee476ca 2aa7389ef61cc9d25cd698ba69252c55f65a55700ce26de817ee1de43120108c 3fd3c19397b2e924ba0e4556f2c9377eccdc58314ca8d2bdcf32db10b478ae41 4c1ad43ef803c9766b730638f334f3a0c9a8d763435da1e2ffb842c2761df8ec 7fe1f69096846344beae6da8d2abc2e0ced625ec110150d7398131c8ba421daa
	I0127 11:42:35.577379  397538 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 11:42:35.594728  397538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:42:35.605636  397538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:42:35.605659  397538 kubeadm.go:157] found existing configuration files:
	
	I0127 11:42:35.605702  397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:42:35.617924  397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:42:35.617977  397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:42:35.630441  397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:42:35.640581  397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:42:35.640628  397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:42:35.650822  397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:42:35.662986  397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:42:35.663034  397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:42:35.675243  397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:42:35.687128  397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:42:35.687177  397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:42:35.699749  397538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:42:35.712592  397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:35.847944  397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:36.870825  397538 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.022832506s)
	I0127 11:42:36.870862  397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:37.118281  397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:37.230184  397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:37.368659  397538 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:42:37.368754  397538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:37.868881  397538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:38.369735  397538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:38.391274  397538 api_server.go:72] duration metric: took 1.022614421s to wait for apiserver process to appear ...
	I0127 11:42:38.391309  397538 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:42:38.391336  397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0127 11:42:38.391882  397538 api_server.go:269] stopped: https://192.168.72.171:8443/healthz: Get "https://192.168.72.171:8443/healthz": dial tcp 192.168.72.171:8443: connect: connection refused
	I0127 11:42:38.892089  397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0127 11:42:41.606183  397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 11:42:41.606224  397538 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 11:42:41.606249  397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0127 11:42:41.637709  397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 11:42:41.637740  397538 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 11:42:41.892205  397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0127 11:42:41.901055  397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:42:41.901097  397538 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 11:42:42.391537  397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0127 11:42:42.401042  397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:42:42.401077  397538 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 11:42:42.891470  397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0127 11:42:42.919612  397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:42:42.919641  397538 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 11:42:43.391456  397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0127 11:42:43.397718  397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 200:
	ok
	I0127 11:42:43.405498  397538 api_server.go:141] control plane version: v1.32.1
	I0127 11:42:43.405531  397538 api_server.go:131] duration metric: took 5.014213795s to wait for apiserver health ...
	I0127 11:42:43.405544  397538 cni.go:84] Creating CNI manager for ""
	I0127 11:42:43.405555  397538 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 11:42:43.407066  397538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:42:43.408189  397538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:42:43.421042  397538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 11:42:43.441442  397538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:42:43.453790  397538 system_pods.go:59] 8 kube-system pods found
	I0127 11:42:43.453826  397538 system_pods.go:61] "coredns-668d6bf9bc-kl7br" [4c9a4a3c-b46d-43ea-8ecb-13ad6e04d183] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 11:42:43.453833  397538 system_pods.go:61] "etcd-no-preload-976043" [bf71a082-71be-41b6-b3c9-662972866d48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 11:42:43.453840  397538 system_pods.go:61] "kube-apiserver-no-preload-976043" [73449d58-727b-41f5-b151-5f2d84a608a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 11:42:43.453849  397538 system_pods.go:61] "kube-controller-manager-no-preload-976043" [f1cb08d8-d445-4ea9-b742-02cb993145e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 11:42:43.453858  397538 system_pods.go:61] "kube-proxy-hbtts" [5c3f5981-4c7c-4a09-b11e-5130a4bcc58b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 11:42:43.453872  397538 system_pods.go:61] "kube-scheduler-no-preload-976043" [71129e30-f010-47a1-94e2-da06808e6cac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 11:42:43.453888  397538 system_pods.go:61] "metrics-server-f79f97bbb-kd26p" [331dbc70-7767-4514-bae7-7de96157962b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:42:43.453901  397538 system_pods.go:61] "storage-provisioner" [29f19d3c-f21f-48e5-8e94-1a62782873de] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 11:42:43.453913  397538 system_pods.go:74] duration metric: took 12.448185ms to wait for pod list to return data ...
	I0127 11:42:43.453929  397538 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:42:43.457750  397538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:42:43.457779  397538 node_conditions.go:123] node cpu capacity is 2
	I0127 11:42:43.457793  397538 node_conditions.go:105] duration metric: took 3.853672ms to run NodePressure ...
	I0127 11:42:43.457815  397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:43.795140  397538 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 11:42:43.803094  397538 kubeadm.go:739] kubelet initialised
	I0127 11:42:43.803117  397538 kubeadm.go:740] duration metric: took 7.947754ms waiting for restarted kubelet to initialise ...
	I0127 11:42:43.803128  397538 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:42:43.813516  397538 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-kl7br" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:45.821221  397538 pod_ready.go:103] pod "coredns-668d6bf9bc-kl7br" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:48.320697  397538 pod_ready.go:103] pod "coredns-668d6bf9bc-kl7br" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:50.824188  397538 pod_ready.go:103] pod "coredns-668d6bf9bc-kl7br" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:51.822688  397538 pod_ready.go:93] pod "coredns-668d6bf9bc-kl7br" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:51.822715  397538 pod_ready.go:82] duration metric: took 8.009170425s for pod "coredns-668d6bf9bc-kl7br" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:51.822726  397538 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:51.827704  397538 pod_ready.go:93] pod "etcd-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:51.827738  397538 pod_ready.go:82] duration metric: took 5.005165ms for pod "etcd-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:51.827752  397538 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:51.832505  397538 pod_ready.go:93] pod "kube-apiserver-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:51.832530  397538 pod_ready.go:82] duration metric: took 4.76871ms for pod "kube-apiserver-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:51.832543  397538 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:51.837246  397538 pod_ready.go:93] pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:51.837266  397538 pod_ready.go:82] duration metric: took 4.715561ms for pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:51.837275  397538 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hbtts" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:51.841769  397538 pod_ready.go:93] pod "kube-proxy-hbtts" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:51.841789  397538 pod_ready.go:82] duration metric: took 4.507355ms for pod "kube-proxy-hbtts" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:51.841808  397538 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:52.218667  397538 pod_ready.go:93] pod "kube-scheduler-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:52.218697  397538 pod_ready.go:82] duration metric: took 376.878504ms for pod "kube-scheduler-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:52.218713  397538 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:54.227099  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:56.730104  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:58.731158  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:01.226937  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:03.227455  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:05.725903  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:08.224728  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:10.225751  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:12.226333  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:14.724890  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:17.227515  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:19.726498  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:22.226150  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:24.724857  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:27.225563  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:29.225653  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:31.725147  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:33.725374  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:36.225540  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:38.724572  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:41.224062  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:43.225491  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:45.723890  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:47.724570  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:49.724802  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:51.725104  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:54.224681  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:56.724258  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:58.726811  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:01.225445  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:03.225504  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:05.225804  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:07.724625  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:09.725469  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:12.226469  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:14.724112  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:16.724412  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:18.725198  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:20.725929  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:23.226983  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:25.724639  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:27.725194  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:30.223869  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:32.225658  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:34.724879  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:37.228509  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:39.725386  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:42.225001  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:44.725533  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:47.226857  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:49.724167  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:51.725505  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:53.726155  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:56.225365  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:58.724700  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:00.724747  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:03.226195  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:05.723646  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:07.724134  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:09.725928  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:12.225252  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:14.724086  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:16.724383  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:18.725324  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:21.225304  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:23.226569  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:25.724694  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:27.725948  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:29.725998  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:32.225036  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:34.226745  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:36.725662  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:39.226109  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:41.729561  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:44.225033  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:46.226354  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:48.723795  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:50.724244  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:52.725214  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:55.224770  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:57.225423  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:59.725101  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:02.225903  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:04.725305  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:06.727299  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:09.225730  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:11.725343  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:14.226106  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:16.226336  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:18.226656  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:20.728233  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:23.225330  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:25.225642  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:27.725596  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:30.225271  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:32.226910  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:34.725753  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:36.726023  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:38.726555  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:41.224361  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:43.226049  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.226221  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:47.226574  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:49.732759  397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:52.219144  397538 pod_ready.go:82] duration metric: took 4m0.000395098s for pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace to be "Ready" ...
	E0127 11:46:52.219176  397538 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:46:52.219202  397538 pod_ready.go:39] duration metric: took 4m8.416062213s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:46:52.219242  397538 kubeadm.go:597] duration metric: took 4m16.754235764s to restartPrimaryControlPlane
	W0127 11:46:52.219339  397538 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
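(Editor's note, not part of the captured log: the four-minute wait above polls the same Ready condition that kubectl exposes. A minimal manual check of that condition, for reference only, assuming kubectl is pointed at this cluster and assuming the addon's pod carries the conventional k8s-app=metrics-server label, which is not shown in the log:

	# Hypothetical manual equivalent of the pod_ready poll above; the label selector is an assumption.
	kubectl -n kube-system get pod -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	# Block until Ready, mirroring the 4m0s timeout the test uses:
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m
)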
	I0127 11:46:52.219373  397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 11:46:54.231110  397538 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.011708362s)
	I0127 11:46:54.231201  397538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:46:54.245569  397538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:46:54.255544  397538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:46:54.265103  397538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:46:54.265122  397538 kubeadm.go:157] found existing configuration files:
	
	I0127 11:46:54.265162  397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:46:54.274787  397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:46:54.274845  397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:46:54.284700  397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:46:54.296043  397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:46:54.296094  397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:46:54.306687  397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:46:54.316592  397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:46:54.316634  397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:46:54.327048  397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:46:54.336484  397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:46:54.336575  397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:46:54.346187  397538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:46:54.517349  397538 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:47:02.511703  397538 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:47:02.511780  397538 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:47:02.511862  397538 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:47:02.511994  397538 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:47:02.512101  397538 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:47:02.512189  397538 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:47:02.513436  397538 out.go:235]   - Generating certificates and keys ...
	I0127 11:47:02.513528  397538 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:47:02.513639  397538 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:47:02.513744  397538 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:47:02.513819  397538 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:47:02.513915  397538 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:47:02.514010  397538 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:47:02.514099  397538 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:47:02.514179  397538 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:47:02.514281  397538 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:47:02.514398  397538 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:47:02.514464  397538 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:47:02.514567  397538 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:47:02.514655  397538 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:47:02.514739  397538 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:47:02.514817  397538 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:47:02.514903  397538 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:47:02.514993  397538 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:47:02.515101  397538 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:47:02.515191  397538 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:47:02.516275  397538 out.go:235]   - Booting up control plane ...
	I0127 11:47:02.516383  397538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:47:02.516486  397538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:47:02.516570  397538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:47:02.516721  397538 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:47:02.516858  397538 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:47:02.516915  397538 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:47:02.517091  397538 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:47:02.517220  397538 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:47:02.517310  397538 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.728303ms
	I0127 11:47:02.517411  397538 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:47:02.517497  397538 kubeadm.go:310] [api-check] The API server is healthy after 5.002592339s
	I0127 11:47:02.517660  397538 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:47:02.517804  397538 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:47:02.517892  397538 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:47:02.518080  397538 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-976043 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:47:02.518169  397538 kubeadm.go:310] [bootstrap-token] Using token: dgvydd.xna4ynr2hbmwtuzw
	I0127 11:47:02.519545  397538 out.go:235]   - Configuring RBAC rules ...
	I0127 11:47:02.519669  397538 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:47:02.519772  397538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:47:02.519947  397538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:47:02.520118  397538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:47:02.520289  397538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:47:02.520423  397538 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:47:02.520574  397538 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:47:02.520643  397538 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:47:02.520712  397538 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:47:02.520721  397538 kubeadm.go:310] 
	I0127 11:47:02.520812  397538 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:47:02.520827  397538 kubeadm.go:310] 
	I0127 11:47:02.520934  397538 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:47:02.520947  397538 kubeadm.go:310] 
	I0127 11:47:02.520980  397538 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:47:02.521067  397538 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:47:02.521152  397538 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:47:02.521167  397538 kubeadm.go:310] 
	I0127 11:47:02.521247  397538 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:47:02.521256  397538 kubeadm.go:310] 
	I0127 11:47:02.521333  397538 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:47:02.521342  397538 kubeadm.go:310] 
	I0127 11:47:02.521417  397538 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:47:02.521541  397538 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:47:02.521665  397538 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:47:02.521676  397538 kubeadm.go:310] 
	I0127 11:47:02.521779  397538 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:47:02.521880  397538 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:47:02.521889  397538 kubeadm.go:310] 
	I0127 11:47:02.522019  397538 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dgvydd.xna4ynr2hbmwtuzw \
	I0127 11:47:02.522168  397538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c769a71fa2072963699012a67c9bb4b27b6fc88b52aea51191b7b2189ca81982 \
	I0127 11:47:02.522200  397538 kubeadm.go:310] 	--control-plane 
	I0127 11:47:02.522216  397538 kubeadm.go:310] 
	I0127 11:47:02.522326  397538 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:47:02.522336  397538 kubeadm.go:310] 
	I0127 11:47:02.522448  397538 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dgvydd.xna4ynr2hbmwtuzw \
	I0127 11:47:02.522601  397538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c769a71fa2072963699012a67c9bb4b27b6fc88b52aea51191b7b2189ca81982 
	I0127 11:47:02.522616  397538 cni.go:84] Creating CNI manager for ""
	I0127 11:47:02.522625  397538 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 11:47:02.524672  397538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:47:02.525706  397538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:47:02.538650  397538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 11:47:02.566811  397538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:47:02.566893  397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:02.566922  397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-976043 minikube.k8s.io/updated_at=2025_01_27T11_47_02_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=no-preload-976043 minikube.k8s.io/primary=true
	I0127 11:47:02.811376  397538 ops.go:34] apiserver oom_adj: -16
	I0127 11:47:02.811527  397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:03.312022  397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:03.812210  397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:04.311782  397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:04.812533  397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:05.312605  397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:05.812482  397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:06.311649  397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:06.811846  397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:06.921923  397538 kubeadm.go:1113] duration metric: took 4.355092744s to wait for elevateKubeSystemPrivileges
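(Editor's note, not part of the captured log: the repeated "get sa default" calls above poll until the cluster's default service account exists, a sign that the service-account controller is running, after the minikube-rbac cluster-admin binding for kube-system:default was created earlier in the log. A hedged, illustrative way to confirm the elevated privileges once that loop completes, assuming kubectl targets this cluster:

	# Hypothetical check that the kube-system default service account now holds cluster-admin:
	kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default
)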
	I0127 11:47:06.921957  397538 kubeadm.go:394] duration metric: took 4m31.538223966s to StartCluster
	I0127 11:47:06.921979  397538 settings.go:142] acquiring lock: {Name:mkb277d193c8888d23a77778c65f322a69e59091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:06.922096  397538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 11:47:06.923598  397538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/kubeconfig: {Name:mk12891275228a2835a35659c2ede45028f0a576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:06.923858  397538 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 11:47:06.923968  397538 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:47:06.924085  397538 config.go:182] Loaded profile config "no-preload-976043": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:47:06.924072  397538 addons.go:69] Setting storage-provisioner=true in profile "no-preload-976043"
	I0127 11:47:06.924104  397538 addons.go:69] Setting dashboard=true in profile "no-preload-976043"
	I0127 11:47:06.924119  397538 addons.go:69] Setting metrics-server=true in profile "no-preload-976043"
	I0127 11:47:06.924126  397538 addons.go:238] Setting addon storage-provisioner=true in "no-preload-976043"
	I0127 11:47:06.924132  397538 addons.go:238] Setting addon dashboard=true in "no-preload-976043"
	I0127 11:47:06.924133  397538 addons.go:238] Setting addon metrics-server=true in "no-preload-976043"
	W0127 11:47:06.924136  397538 addons.go:247] addon storage-provisioner should already be in state true
	W0127 11:47:06.924142  397538 addons.go:247] addon dashboard should already be in state true
	W0127 11:47:06.924150  397538 addons.go:247] addon metrics-server should already be in state true
	I0127 11:47:06.924096  397538 addons.go:69] Setting default-storageclass=true in profile "no-preload-976043"
	I0127 11:47:06.924211  397538 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-976043"
	I0127 11:47:06.924182  397538 host.go:66] Checking if "no-preload-976043" exists ...
	I0127 11:47:06.924182  397538 host.go:66] Checking if "no-preload-976043" exists ...
	I0127 11:47:06.924182  397538 host.go:66] Checking if "no-preload-976043" exists ...
	I0127 11:47:06.924663  397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:06.924717  397538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:06.924792  397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:06.924802  397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:06.924817  397538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:06.924838  397538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:06.924951  397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:06.924994  397538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:06.927481  397538 out.go:177] * Verifying Kubernetes components...
	I0127 11:47:06.928798  397538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:47:06.944266  397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43989
	I0127 11:47:06.944533  397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36299
	I0127 11:47:06.944635  397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I0127 11:47:06.944782  397538 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:06.945085  397538 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:06.945253  397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37173
	I0127 11:47:06.945607  397538 main.go:141] libmachine: Using API Version  1
	I0127 11:47:06.945646  397538 main.go:141] libmachine: Using API Version  1
	I0127 11:47:06.945671  397538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:06.945722  397538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:06.946041  397538 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:06.946145  397538 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:06.946203  397538 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:06.946623  397538 main.go:141] libmachine: Using API Version  1
	I0127 11:47:06.946643  397538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:06.946742  397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:06.946786  397538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:06.946951  397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:06.946994  397538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:06.947206  397538 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:06.947222  397538 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:06.947394  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetState
	I0127 11:47:06.947712  397538 main.go:141] libmachine: Using API Version  1
	I0127 11:47:06.947736  397538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:06.948119  397538 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:06.948785  397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:06.948846  397538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:06.951738  397538 addons.go:238] Setting addon default-storageclass=true in "no-preload-976043"
	W0127 11:47:06.951759  397538 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:47:06.951791  397538 host.go:66] Checking if "no-preload-976043" exists ...
	I0127 11:47:06.952140  397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:06.952171  397538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:06.973135  397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45689
	I0127 11:47:06.973855  397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36191
	I0127 11:47:06.974102  397538 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:06.974240  397538 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:06.974748  397538 main.go:141] libmachine: Using API Version  1
	I0127 11:47:06.974769  397538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:06.974883  397538 main.go:141] libmachine: Using API Version  1
	I0127 11:47:06.974902  397538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:06.975329  397538 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:06.975608  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetState
	I0127 11:47:06.977046  397538 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:06.977341  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetState
	I0127 11:47:06.979372  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:47:06.979929  397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
	I0127 11:47:06.980128  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:47:06.980305  397538 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:06.980939  397538 main.go:141] libmachine: Using API Version  1
	I0127 11:47:06.980953  397538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:06.981201  397538 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:47:06.981499  397538 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:06.981883  397538 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:47:06.982169  397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:06.982227  397538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:06.983298  397538 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:06.983322  397538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:47:06.983344  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:47:06.983857  397538 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:47:06.985635  397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45855
	I0127 11:47:06.986281  397538 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:06.986637  397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:47:06.986661  397538 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:47:06.986683  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:47:06.987067  397538 main.go:141] libmachine: Using API Version  1
	I0127 11:47:06.987084  397538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:06.987615  397538 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:06.987933  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetState
	I0127 11:47:06.991679  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:47:06.992043  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:47:06.992369  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:47:06.992880  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:47:06.992905  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:47:06.993076  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
	I0127 11:47:06.993192  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:47:06.993217  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:47:06.993263  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:47:06.993421  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
	I0127 11:47:06.993568  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
	I0127 11:47:06.993630  397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
	I0127 11:47:06.993759  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:47:06.993894  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
	I0127 11:47:06.994030  397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
	I0127 11:47:07.001313  397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
	I0127 11:47:07.001677  397538 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:07.002144  397538 main.go:141] libmachine: Using API Version  1
	I0127 11:47:07.002158  397538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:07.002626  397538 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:07.002804  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetState
	I0127 11:47:07.004433  397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
	I0127 11:47:07.004630  397538 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:07.004654  397538 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:47:07.004666  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:47:07.007710  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:47:07.008211  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:47:07.008307  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:47:07.008552  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
	I0127 11:47:07.008724  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:47:07.008884  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
	I0127 11:47:07.009008  397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
	I0127 11:47:07.017633  397538 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:47:07.018862  397538 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:47:07.018884  397538 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:47:07.018906  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
	I0127 11:47:07.022158  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:47:07.022759  397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
	I0127 11:47:07.022784  397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
	I0127 11:47:07.022955  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
	I0127 11:47:07.023096  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
	I0127 11:47:07.023241  397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
	I0127 11:47:07.023384  397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
	I0127 11:47:07.214231  397538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:47:07.237881  397538 node_ready.go:35] waiting up to 6m0s for node "no-preload-976043" to be "Ready" ...
	I0127 11:47:07.263158  397538 node_ready.go:49] node "no-preload-976043" has status "Ready":"True"
	I0127 11:47:07.263185  397538 node_ready.go:38] duration metric: took 25.243171ms for node "no-preload-976043" to be "Ready" ...
	I0127 11:47:07.263198  397538 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:07.270196  397538 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5cktj" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:07.341301  397538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:07.358210  397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:47:07.358235  397538 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:47:07.360985  397538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:07.381453  397538 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:47:07.381492  397538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:47:07.466768  397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:47:07.466802  397538 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:47:07.493189  397538 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:47:07.493219  397538 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:47:07.713486  397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:47:07.713521  397538 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:47:07.724092  397538 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:07.724125  397538 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:47:07.769193  397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:47:07.769227  397538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:47:07.846823  397538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:07.935651  397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:47:07.935684  397538 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:47:08.146639  397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:47:08.146679  397538 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:47:08.296867  397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:47:08.296901  397538 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:47:08.392971  397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:47:08.393017  397538 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:47:08.479861  397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:08.479897  397538 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:47:08.678114  397538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:08.977235  397538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.63589482s)
	I0127 11:47:08.977301  397538 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:08.977323  397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
	I0127 11:47:08.977243  397538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.616166623s)
	I0127 11:47:08.977402  397538 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:08.977422  397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
	I0127 11:47:08.977652  397538 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:08.977694  397538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:08.977710  397538 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:08.977720  397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
	I0127 11:47:08.977871  397538 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:08.977887  397538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:08.977896  397538 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:08.977904  397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
	I0127 11:47:08.978211  397538 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:08.978228  397538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:08.979829  397538 main.go:141] libmachine: (no-preload-976043) DBG | Closing plugin on server side
	I0127 11:47:08.979875  397538 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:08.979882  397538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:09.000588  397538 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:09.000611  397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
	I0127 11:47:09.000859  397538 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:09.000880  397538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:09.000894  397538 main.go:141] libmachine: (no-preload-976043) DBG | Closing plugin on server side
	I0127 11:47:09.324488  397538 pod_ready.go:93] pod "coredns-668d6bf9bc-5cktj" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:09.324521  397538 pod_ready.go:82] duration metric: took 2.054295919s for pod "coredns-668d6bf9bc-5cktj" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:09.324537  397538 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-kjqjk" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:09.402781  397538 pod_ready.go:93] pod "coredns-668d6bf9bc-kjqjk" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:09.402807  397538 pod_ready.go:82] duration metric: took 78.262484ms for pod "coredns-668d6bf9bc-kjqjk" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:09.402819  397538 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:09.537430  397538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.690554711s)
	I0127 11:47:09.537480  397538 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:09.537490  397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
	I0127 11:47:09.537841  397538 main.go:141] libmachine: (no-preload-976043) DBG | Closing plugin on server side
	I0127 11:47:09.537922  397538 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:09.537948  397538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:09.537959  397538 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:09.537968  397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
	I0127 11:47:09.538230  397538 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:09.538246  397538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:09.538257  397538 addons.go:479] Verifying addon metrics-server=true in "no-preload-976043"
	I0127 11:47:10.322468  397538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.644290278s)
	I0127 11:47:10.322545  397538 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:10.322564  397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
	I0127 11:47:10.323749  397538 main.go:141] libmachine: (no-preload-976043) DBG | Closing plugin on server side
	I0127 11:47:10.323766  397538 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:10.323841  397538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:10.323868  397538 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:10.323877  397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
	I0127 11:47:10.324209  397538 main.go:141] libmachine: (no-preload-976043) DBG | Closing plugin on server side
	I0127 11:47:10.324260  397538 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:10.324276  397538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:10.326443  397538 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-976043 addons enable metrics-server
	
	I0127 11:47:10.327576  397538 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 11:47:10.328699  397538 addons.go:514] duration metric: took 3.404742641s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:47:11.408591  397538 pod_ready.go:103] pod "etcd-no-preload-976043" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:12.412726  397538 pod_ready.go:93] pod "etcd-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:12.412747  397538 pod_ready.go:82] duration metric: took 3.009921497s for pod "etcd-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:12.412757  397538 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:12.419108  397538 pod_ready.go:93] pod "kube-apiserver-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:12.419131  397538 pod_ready.go:82] duration metric: took 6.362026ms for pod "kube-apiserver-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:12.419140  397538 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:14.425681  397538 pod_ready.go:103] pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:14.924760  397538 pod_ready.go:93] pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:14.924789  397538 pod_ready.go:82] duration metric: took 2.505641174s for pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:14.924804  397538 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-44m77" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:14.930236  397538 pod_ready.go:93] pod "kube-proxy-44m77" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:14.930255  397538 pod_ready.go:82] duration metric: took 5.444724ms for pod "kube-proxy-44m77" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:14.930264  397538 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:14.934058  397538 pod_ready.go:93] pod "kube-scheduler-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:14.934073  397538 pod_ready.go:82] duration metric: took 3.802556ms for pod "kube-scheduler-no-preload-976043" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:14.934081  397538 pod_ready.go:39] duration metric: took 7.670861335s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:14.934100  397538 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:47:14.934154  397538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:14.951237  397538 api_server.go:72] duration metric: took 8.02734538s to wait for apiserver process to appear ...
	I0127 11:47:14.951258  397538 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:47:14.951276  397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0127 11:47:14.958111  397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 200:
	ok
	I0127 11:47:14.959538  397538 api_server.go:141] control plane version: v1.32.1
	I0127 11:47:14.959563  397538 api_server.go:131] duration metric: took 8.296106ms to wait for apiserver health ...
	I0127 11:47:14.959572  397538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:47:14.967006  397538 system_pods.go:59] 9 kube-system pods found
	I0127 11:47:14.967038  397538 system_pods.go:61] "coredns-668d6bf9bc-5cktj" [def28b6b-a9fa-4385-a844-1f827384e6cd] Running
	I0127 11:47:14.967046  397538 system_pods.go:61] "coredns-668d6bf9bc-kjqjk" [14e1705d-7ee7-407f-a266-3c17da987f44] Running
	I0127 11:47:14.967052  397538 system_pods.go:61] "etcd-no-preload-976043" [4ac8056f-a0f1-4004-9714-274d6bb1c966] Running
	I0127 11:47:14.967059  397538 system_pods.go:61] "kube-apiserver-no-preload-976043" [ebf8e215-aa94-48b0-9951-c708fbe949f2] Running
	I0127 11:47:14.967064  397538 system_pods.go:61] "kube-controller-manager-no-preload-976043" [cec6a288-312c-44f5-917a-2a2af911f261] Running
	I0127 11:47:14.967070  397538 system_pods.go:61] "kube-proxy-44m77" [43e9e383-ae16-4265-9e7e-199b1adb4ac2] Running
	I0127 11:47:14.967079  397538 system_pods.go:61] "kube-scheduler-no-preload-976043" [61f17854-a314-46ff-a7ab-6b2fca507dc6] Running
	I0127 11:47:14.967089  397538 system_pods.go:61] "metrics-server-f79f97bbb-cxprr" [fcf4fd1c-5cc8-43ab-a46a-32c4f5559168] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:47:14.967105  397538 system_pods.go:61] "storage-provisioner" [8cc9c314-b668-4b0d-8d54-53a058019e73] Running
	I0127 11:47:14.967124  397538 system_pods.go:74] duration metric: took 7.544376ms to wait for pod list to return data ...
	I0127 11:47:14.967135  397538 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:47:14.969819  397538 default_sa.go:45] found service account: "default"
	I0127 11:47:14.969846  397538 default_sa.go:55] duration metric: took 2.703478ms for default service account to be created ...
	I0127 11:47:14.969856  397538 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:47:15.077668  397538 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-976043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-976043 -n no-preload-976043
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-976043 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-976043 logs -n 25: (1.231231315s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-230154 sudo iptables                       | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo docker                         | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo find                           | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo crio                           | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-230154                                     | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:51:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:51:47.607978  410030 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:51:47.608091  410030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:51:47.608100  410030 out.go:358] Setting ErrFile to fd 2...
	I0127 11:51:47.608109  410030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:51:47.608278  410030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
	I0127 11:51:47.608812  410030 out.go:352] Setting JSON to false
	I0127 11:51:47.609953  410030 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9253,"bootTime":1737969455,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:51:47.610057  410030 start.go:139] virtualization: kvm guest
	I0127 11:51:47.611895  410030 out.go:177] * [bridge-230154] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:51:47.613441  410030 notify.go:220] Checking for updates...
	I0127 11:51:47.613479  410030 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:51:47.614719  410030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:51:47.615971  410030 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 11:51:47.617111  410030 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	I0127 11:51:47.618157  410030 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:51:47.619361  410030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:51:47.620941  410030 config.go:182] Loaded profile config "default-k8s-diff-port-259716": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:51:47.621061  410030 config.go:182] Loaded profile config "flannel-230154": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:51:47.621206  410030 config.go:182] Loaded profile config "no-preload-976043": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:51:47.621328  410030 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:51:47.658431  410030 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 11:51:47.659436  410030 start.go:297] selected driver: kvm2
	I0127 11:51:47.659452  410030 start.go:901] validating driver "kvm2" against <nil>
	I0127 11:51:47.659462  410030 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:51:47.660244  410030 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:51:47.660346  410030 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-348858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:51:47.676075  410030 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:51:47.676119  410030 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:51:47.676407  410030 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:51:47.676445  410030 cni.go:84] Creating CNI manager for "bridge"
	I0127 11:51:47.676456  410030 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 11:51:47.676521  410030 start.go:340] cluster config:
	{Name:bridge-230154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:51:47.676642  410030 iso.go:125] acquiring lock: {Name:mk6cdd2a3d0bfb3682c1f0c806368944f23c4809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:51:47.677997  410030 out.go:177] * Starting "bridge-230154" primary control-plane node in "bridge-230154" cluster
	I0127 11:51:47.678894  410030 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:51:47.678924  410030 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 11:51:47.678936  410030 cache.go:56] Caching tarball of preloaded images
	I0127 11:51:47.679024  410030 preload.go:172] Found /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 11:51:47.679037  410030 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 11:51:47.679160  410030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/config.json ...
	I0127 11:51:47.679185  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/config.json: {Name:mk2b6cd63816fa28cdffe5707c10ed7a16feb9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:51:47.679337  410030 start.go:360] acquireMachinesLock for bridge-230154: {Name:mk69dba1a41baeb0794a28159a5cef220370e224 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:51:47.679375  410030 start.go:364] duration metric: took 23.748µs to acquireMachinesLock for "bridge-230154"
	I0127 11:51:47.679398  410030 start.go:93] Provisioning new machine with config: &{Name:bridge-230154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 11:51:47.679474  410030 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 11:51:46.323131  408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:51:48.324596  408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:51:47.680780  410030 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 11:51:47.680920  410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:51:47.680961  410030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:51:47.695019  410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34083
	I0127 11:51:47.695469  410030 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:51:47.696023  410030 main.go:141] libmachine: Using API Version  1
	I0127 11:51:47.696045  410030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:51:47.696373  410030 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:51:47.696603  410030 main.go:141] libmachine: (bridge-230154) Calling .GetMachineName
	I0127 11:51:47.696816  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:51:47.696969  410030 start.go:159] libmachine.API.Create for "bridge-230154" (driver="kvm2")
	I0127 11:51:47.696999  410030 client.go:168] LocalClient.Create starting
	I0127 11:51:47.697034  410030 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem
	I0127 11:51:47.697071  410030 main.go:141] libmachine: Decoding PEM data...
	I0127 11:51:47.697092  410030 main.go:141] libmachine: Parsing certificate...
	I0127 11:51:47.697163  410030 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem
	I0127 11:51:47.697192  410030 main.go:141] libmachine: Decoding PEM data...
	I0127 11:51:47.697220  410030 main.go:141] libmachine: Parsing certificate...
	I0127 11:51:47.697248  410030 main.go:141] libmachine: Running pre-create checks...
	I0127 11:51:47.697262  410030 main.go:141] libmachine: (bridge-230154) Calling .PreCreateCheck
	I0127 11:51:47.697637  410030 main.go:141] libmachine: (bridge-230154) Calling .GetConfigRaw
	I0127 11:51:47.698098  410030 main.go:141] libmachine: Creating machine...
	I0127 11:51:47.698113  410030 main.go:141] libmachine: (bridge-230154) Calling .Create
	I0127 11:51:47.698255  410030 main.go:141] libmachine: (bridge-230154) creating KVM machine...
	I0127 11:51:47.698270  410030 main.go:141] libmachine: (bridge-230154) creating network...
	I0127 11:51:47.699710  410030 main.go:141] libmachine: (bridge-230154) DBG | found existing default KVM network
	I0127 11:51:47.701093  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:47.700951  410053 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a9:bc:42} reservation:<nil>}
	I0127 11:51:47.702050  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:47.701955  410053 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:50:a8:75} reservation:<nil>}
	I0127 11:51:47.703137  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:47.703062  410053 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000287220}
	I0127 11:51:47.703226  410030 main.go:141] libmachine: (bridge-230154) DBG | created network xml: 
	I0127 11:51:47.703248  410030 main.go:141] libmachine: (bridge-230154) DBG | <network>
	I0127 11:51:47.703258  410030 main.go:141] libmachine: (bridge-230154) DBG |   <name>mk-bridge-230154</name>
	I0127 11:51:47.703285  410030 main.go:141] libmachine: (bridge-230154) DBG |   <dns enable='no'/>
	I0127 11:51:47.703298  410030 main.go:141] libmachine: (bridge-230154) DBG |   
	I0127 11:51:47.703306  410030 main.go:141] libmachine: (bridge-230154) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0127 11:51:47.703321  410030 main.go:141] libmachine: (bridge-230154) DBG |     <dhcp>
	I0127 11:51:47.703334  410030 main.go:141] libmachine: (bridge-230154) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0127 11:51:47.703345  410030 main.go:141] libmachine: (bridge-230154) DBG |     </dhcp>
	I0127 11:51:47.703361  410030 main.go:141] libmachine: (bridge-230154) DBG |   </ip>
	I0127 11:51:47.703384  410030 main.go:141] libmachine: (bridge-230154) DBG |   
	I0127 11:51:47.703400  410030 main.go:141] libmachine: (bridge-230154) DBG | </network>
	I0127 11:51:47.703410  410030 main.go:141] libmachine: (bridge-230154) DBG | 
	I0127 11:51:47.707961  410030 main.go:141] libmachine: (bridge-230154) DBG | trying to create private KVM network mk-bridge-230154 192.168.61.0/24...
	I0127 11:51:47.780019  410030 main.go:141] libmachine: (bridge-230154) DBG | private KVM network mk-bridge-230154 192.168.61.0/24 created
	I0127 11:51:47.780050  410030 main.go:141] libmachine: (bridge-230154) setting up store path in /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154 ...
	I0127 11:51:47.780064  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:47.779969  410053 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20319-348858/.minikube
	I0127 11:51:47.780075  410030 main.go:141] libmachine: (bridge-230154) building disk image from file:///home/jenkins/minikube-integration/20319-348858/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 11:51:47.780095  410030 main.go:141] libmachine: (bridge-230154) Downloading /home/jenkins/minikube-integration/20319-348858/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20319-348858/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 11:51:48.077713  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.077516  410053 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa...
	I0127 11:51:48.209215  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.209093  410053 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/bridge-230154.rawdisk...
	I0127 11:51:48.209256  410030 main.go:141] libmachine: (bridge-230154) DBG | Writing magic tar header
	I0127 11:51:48.209272  410030 main.go:141] libmachine: (bridge-230154) DBG | Writing SSH key tar header
	I0127 11:51:48.209286  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.209206  410053 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154 ...
	I0127 11:51:48.209303  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154
	I0127 11:51:48.209343  410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154 (perms=drwx------)
	I0127 11:51:48.209355  410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration/20319-348858/.minikube/machines (perms=drwxr-xr-x)
	I0127 11:51:48.209368  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-348858/.minikube/machines
	I0127 11:51:48.209389  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-348858/.minikube
	I0127 11:51:48.209411  410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration/20319-348858/.minikube (perms=drwxr-xr-x)
	I0127 11:51:48.209424  410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration/20319-348858 (perms=drwxrwxr-x)
	I0127 11:51:48.209432  410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 11:51:48.209444  410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 11:51:48.209455  410030 main.go:141] libmachine: (bridge-230154) creating domain...
	I0127 11:51:48.209468  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-348858
	I0127 11:51:48.209481  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 11:51:48.209495  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins
	I0127 11:51:48.209503  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home
	I0127 11:51:48.209510  410030 main.go:141] libmachine: (bridge-230154) DBG | skipping /home - not owner
	I0127 11:51:48.210458  410030 main.go:141] libmachine: (bridge-230154) define libvirt domain using xml: 
	I0127 11:51:48.210486  410030 main.go:141] libmachine: (bridge-230154) <domain type='kvm'>
	I0127 11:51:48.210494  410030 main.go:141] libmachine: (bridge-230154)   <name>bridge-230154</name>
	I0127 11:51:48.210500  410030 main.go:141] libmachine: (bridge-230154)   <memory unit='MiB'>3072</memory>
	I0127 11:51:48.210504  410030 main.go:141] libmachine: (bridge-230154)   <vcpu>2</vcpu>
	I0127 11:51:48.210509  410030 main.go:141] libmachine: (bridge-230154)   <features>
	I0127 11:51:48.210519  410030 main.go:141] libmachine: (bridge-230154)     <acpi/>
	I0127 11:51:48.210526  410030 main.go:141] libmachine: (bridge-230154)     <apic/>
	I0127 11:51:48.210531  410030 main.go:141] libmachine: (bridge-230154)     <pae/>
	I0127 11:51:48.210535  410030 main.go:141] libmachine: (bridge-230154)     
	I0127 11:51:48.210542  410030 main.go:141] libmachine: (bridge-230154)   </features>
	I0127 11:51:48.210549  410030 main.go:141] libmachine: (bridge-230154)   <cpu mode='host-passthrough'>
	I0127 11:51:48.210554  410030 main.go:141] libmachine: (bridge-230154)   
	I0127 11:51:48.210560  410030 main.go:141] libmachine: (bridge-230154)   </cpu>
	I0127 11:51:48.210573  410030 main.go:141] libmachine: (bridge-230154)   <os>
	I0127 11:51:48.210585  410030 main.go:141] libmachine: (bridge-230154)     <type>hvm</type>
	I0127 11:51:48.210590  410030 main.go:141] libmachine: (bridge-230154)     <boot dev='cdrom'/>
	I0127 11:51:48.210595  410030 main.go:141] libmachine: (bridge-230154)     <boot dev='hd'/>
	I0127 11:51:48.210601  410030 main.go:141] libmachine: (bridge-230154)     <bootmenu enable='no'/>
	I0127 11:51:48.210607  410030 main.go:141] libmachine: (bridge-230154)   </os>
	I0127 11:51:48.210612  410030 main.go:141] libmachine: (bridge-230154)   <devices>
	I0127 11:51:48.210617  410030 main.go:141] libmachine: (bridge-230154)     <disk type='file' device='cdrom'>
	I0127 11:51:48.210627  410030 main.go:141] libmachine: (bridge-230154)       <source file='/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/boot2docker.iso'/>
	I0127 11:51:48.210631  410030 main.go:141] libmachine: (bridge-230154)       <target dev='hdc' bus='scsi'/>
	I0127 11:51:48.210639  410030 main.go:141] libmachine: (bridge-230154)       <readonly/>
	I0127 11:51:48.210643  410030 main.go:141] libmachine: (bridge-230154)     </disk>
	I0127 11:51:48.210666  410030 main.go:141] libmachine: (bridge-230154)     <disk type='file' device='disk'>
	I0127 11:51:48.210688  410030 main.go:141] libmachine: (bridge-230154)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 11:51:48.210711  410030 main.go:141] libmachine: (bridge-230154)       <source file='/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/bridge-230154.rawdisk'/>
	I0127 11:51:48.210732  410030 main.go:141] libmachine: (bridge-230154)       <target dev='hda' bus='virtio'/>
	I0127 11:51:48.210743  410030 main.go:141] libmachine: (bridge-230154)     </disk>
	I0127 11:51:48.210753  410030 main.go:141] libmachine: (bridge-230154)     <interface type='network'>
	I0127 11:51:48.210760  410030 main.go:141] libmachine: (bridge-230154)       <source network='mk-bridge-230154'/>
	I0127 11:51:48.210767  410030 main.go:141] libmachine: (bridge-230154)       <model type='virtio'/>
	I0127 11:51:48.210780  410030 main.go:141] libmachine: (bridge-230154)     </interface>
	I0127 11:51:48.210787  410030 main.go:141] libmachine: (bridge-230154)     <interface type='network'>
	I0127 11:51:48.210792  410030 main.go:141] libmachine: (bridge-230154)       <source network='default'/>
	I0127 11:51:48.210798  410030 main.go:141] libmachine: (bridge-230154)       <model type='virtio'/>
	I0127 11:51:48.210808  410030 main.go:141] libmachine: (bridge-230154)     </interface>
	I0127 11:51:48.210825  410030 main.go:141] libmachine: (bridge-230154)     <serial type='pty'>
	I0127 11:51:48.210834  410030 main.go:141] libmachine: (bridge-230154)       <target port='0'/>
	I0127 11:51:48.210838  410030 main.go:141] libmachine: (bridge-230154)     </serial>
	I0127 11:51:48.210847  410030 main.go:141] libmachine: (bridge-230154)     <console type='pty'>
	I0127 11:51:48.210858  410030 main.go:141] libmachine: (bridge-230154)       <target type='serial' port='0'/>
	I0127 11:51:48.210867  410030 main.go:141] libmachine: (bridge-230154)     </console>
	I0127 11:51:48.210878  410030 main.go:141] libmachine: (bridge-230154)     <rng model='virtio'>
	I0127 11:51:48.210890  410030 main.go:141] libmachine: (bridge-230154)       <backend model='random'>/dev/random</backend>
	I0127 11:51:48.210898  410030 main.go:141] libmachine: (bridge-230154)     </rng>
	I0127 11:51:48.210903  410030 main.go:141] libmachine: (bridge-230154)     
	I0127 11:51:48.210909  410030 main.go:141] libmachine: (bridge-230154)     
	I0127 11:51:48.210913  410030 main.go:141] libmachine: (bridge-230154)   </devices>
	I0127 11:51:48.210918  410030 main.go:141] libmachine: (bridge-230154) </domain>
	I0127 11:51:48.210926  410030 main.go:141] libmachine: (bridge-230154) 
	I0127 11:51:48.214625  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:37:b6:92 in network default
	I0127 11:51:48.215133  410030 main.go:141] libmachine: (bridge-230154) starting domain...
	I0127 11:51:48.215157  410030 main.go:141] libmachine: (bridge-230154) ensuring networks are active...
	I0127 11:51:48.215168  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:48.215860  410030 main.go:141] libmachine: (bridge-230154) Ensuring network default is active
	I0127 11:51:48.216193  410030 main.go:141] libmachine: (bridge-230154) Ensuring network mk-bridge-230154 is active
	I0127 11:51:48.216783  410030 main.go:141] libmachine: (bridge-230154) getting domain XML...
	I0127 11:51:48.217458  410030 main.go:141] libmachine: (bridge-230154) creating domain...
	I0127 11:51:48.569774  410030 main.go:141] libmachine: (bridge-230154) waiting for IP...
	I0127 11:51:48.570778  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:48.571317  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:48.571362  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.571309  410053 retry.go:31] will retry after 222.051521ms: waiting for domain to come up
	I0127 11:51:48.794921  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:48.795488  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:48.795532  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.795451  410053 retry.go:31] will retry after 300.550406ms: waiting for domain to come up
	I0127 11:51:49.098085  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:49.098673  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:49.098705  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:49.098646  410053 retry.go:31] will retry after 351.204659ms: waiting for domain to come up
	I0127 11:51:49.450989  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:49.451523  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:49.451547  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:49.451503  410053 retry.go:31] will retry after 551.090722ms: waiting for domain to come up
	I0127 11:51:50.003672  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:50.004175  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:50.004220  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:50.004153  410053 retry.go:31] will retry after 550.280324ms: waiting for domain to come up
	I0127 11:51:50.555950  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:50.556457  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:50.556489  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:50.556430  410053 retry.go:31] will retry after 583.250306ms: waiting for domain to come up
	I0127 11:51:51.140978  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:51.141558  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:51.141627  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:51.141533  410053 retry.go:31] will retry after 1.176790151s: waiting for domain to come up
	I0127 11:51:52.320049  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:52.320729  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:52.320797  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:52.320689  410053 retry.go:31] will retry after 1.176590374s: waiting for domain to come up
	I0127 11:51:50.326882  408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:51:52.823007  408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:51:53.498996  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:53.499617  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:53.499644  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:53.499590  410053 retry.go:31] will retry after 1.435449708s: waiting for domain to come up
	I0127 11:51:54.937088  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:54.937656  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:54.937687  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:54.937628  410053 retry.go:31] will retry after 1.670320015s: waiting for domain to come up
	I0127 11:51:56.609490  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:56.610076  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:56.610106  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:56.610030  410053 retry.go:31] will retry after 2.430005713s: waiting for domain to come up
	I0127 11:51:55.322705  408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:51:57.331001  408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:51:59.822867  408290 pod_ready.go:93] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"True"
	I0127 11:51:59.822893  408290 pod_ready.go:82] duration metric: took 18.006590764s for pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.822903  408290 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-x26ng" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.827408  408290 pod_ready.go:93] pod "coredns-668d6bf9bc-x26ng" in "kube-system" namespace has status "Ready":"True"
	I0127 11:51:59.827431  408290 pod_ready.go:82] duration metric: took 4.521822ms for pod "coredns-668d6bf9bc-x26ng" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.827439  408290 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.831731  408290 pod_ready.go:93] pod "etcd-flannel-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:51:59.831754  408290 pod_ready.go:82] duration metric: took 4.307302ms for pod "etcd-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.831766  408290 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.836455  408290 pod_ready.go:93] pod "kube-apiserver-flannel-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:51:59.836476  408290 pod_ready.go:82] duration metric: took 4.701033ms for pod "kube-apiserver-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.836485  408290 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.841564  408290 pod_ready.go:93] pod "kube-controller-manager-flannel-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:51:59.841607  408290 pod_ready.go:82] duration metric: took 5.114623ms for pod "kube-controller-manager-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.841619  408290 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-fwvhb" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:00.221093  408290 pod_ready.go:93] pod "kube-proxy-fwvhb" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:00.221117  408290 pod_ready.go:82] duration metric: took 379.489464ms for pod "kube-proxy-fwvhb" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:00.221127  408290 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.041589  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:59.042126  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:59.042157  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:59.042094  410053 retry.go:31] will retry after 2.320988246s: waiting for domain to come up
	I0127 11:52:01.364475  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:01.365092  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:52:01.365148  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:52:01.365068  410053 retry.go:31] will retry after 4.110080679s: waiting for domain to come up
	I0127 11:52:00.620378  408290 pod_ready.go:93] pod "kube-scheduler-flannel-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:00.620412  408290 pod_ready.go:82] duration metric: took 399.276857ms for pod "kube-scheduler-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:00.620423  408290 pod_ready.go:39] duration metric: took 18.811740813s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
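The pod_ready lines above poll each system pod until its Ready condition reports True. A minimal sketch of that kind of check with client-go follows; the kubeconfig path is illustrative, and the test harness uses its own pod_ready helpers rather than this code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig path; the real test uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-668d6bf9bc-cxhgb", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
}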
	I0127 11:52:00.620442  408290 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:52:00.620509  408290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:52:00.636203  408290 api_server.go:72] duration metric: took 26.524075024s to wait for apiserver process to appear ...
	I0127 11:52:00.636225  408290 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:52:00.636241  408290 api_server.go:253] Checking apiserver healthz at https://192.168.50.249:8443/healthz ...
	I0127 11:52:00.640488  408290 api_server.go:279] https://192.168.50.249:8443/healthz returned 200:
	ok
	I0127 11:52:00.641304  408290 api_server.go:141] control plane version: v1.32.1
	I0127 11:52:00.641328  408290 api_server.go:131] duration metric: took 5.095135ms to wait for apiserver health ...
	I0127 11:52:00.641338  408290 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:52:00.823404  408290 system_pods.go:59] 8 kube-system pods found
	I0127 11:52:00.823440  408290 system_pods.go:61] "coredns-668d6bf9bc-cxhgb" [1b5c455f-cd3e-4049-ad66-0b5ac83e0cfc] Running
	I0127 11:52:00.823447  408290 system_pods.go:61] "coredns-668d6bf9bc-x26ng" [faddde6c-95bb-43ed-8312-9cb6d1381b76] Running
	I0127 11:52:00.823451  408290 system_pods.go:61] "etcd-flannel-230154" [04cfa9e0-f3d2-4147-a565-73d9a56314be] Running
	I0127 11:52:00.823457  408290 system_pods.go:61] "kube-apiserver-flannel-230154" [b7e45b11-41e6-4471-b69f-ebcfa9fe0b11] Running
	I0127 11:52:00.823460  408290 system_pods.go:61] "kube-controller-manager-flannel-230154" [db9c61ca-4433-474f-b896-bf75b5586aa8] Running
	I0127 11:52:00.823464  408290 system_pods.go:61] "kube-proxy-fwvhb" [c9df58ca-9fda-4b0d-83d3-b0d5771a2b8d] Running
	I0127 11:52:00.823468  408290 system_pods.go:61] "kube-scheduler-flannel-230154" [ef963048-9064-4a1b-8c7c-0b560ac1073e] Running
	I0127 11:52:00.823473  408290 system_pods.go:61] "storage-provisioner" [1d37e577-26fc-4920-addd-4c2b9ea83d4f] Running
	I0127 11:52:00.823480  408290 system_pods.go:74] duration metric: took 182.135829ms to wait for pod list to return data ...
	I0127 11:52:00.823492  408290 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:52:01.019648  408290 default_sa.go:45] found service account: "default"
	I0127 11:52:01.019672  408290 default_sa.go:55] duration metric: took 196.17422ms for default service account to be created ...
	I0127 11:52:01.019680  408290 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:52:01.222213  408290 system_pods.go:87] 8 kube-system pods found
	I0127 11:52:05.478491  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:05.479050  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:52:05.479075  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:52:05.479016  410053 retry.go:31] will retry after 3.983085371s: waiting for domain to come up
	I0127 11:52:09.463887  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:09.464547  410030 main.go:141] libmachine: (bridge-230154) found domain IP: 192.168.61.114
	I0127 11:52:09.464572  410030 main.go:141] libmachine: (bridge-230154) reserving static IP address...
	I0127 11:52:09.464581  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has current primary IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:09.464980  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find host DHCP lease matching {name: "bridge-230154", mac: "52:54:00:79:3a:f7", ip: "192.168.61.114"} in network mk-bridge-230154
	I0127 11:52:09.541183  410030 main.go:141] libmachine: (bridge-230154) reserved static IP address 192.168.61.114 for domain bridge-230154
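The retry.go lines above wait for the libvirt domain to report an IP address, sleeping a slightly randomized, growing interval between attempts. A minimal generic sketch of that retry pattern; the base delay, growth factor and jitter below are assumptions, not minikube's actual retry parameters.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, waiting a
// growing, jittered delay between calls.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Up to 50% jitter so concurrent waiters do not retry in lockstep (assumption).
		jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2
	}
	return errors.New("gave up waiting")
}

func main() {
	err := retryWithBackoff(10, 2*time.Second, func() error {
		return errors.New("domain has no IP yet") // replace with the real readiness check
	})
	fmt.Println(err)
}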
	I0127 11:52:09.541215  410030 main.go:141] libmachine: (bridge-230154) waiting for SSH...
	I0127 11:52:09.541226  410030 main.go:141] libmachine: (bridge-230154) DBG | Getting to WaitForSSH function...
	I0127 11:52:09.544735  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:09.545125  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154
	I0127 11:52:09.545156  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find defined IP address of network mk-bridge-230154 interface with MAC address 52:54:00:79:3a:f7
	I0127 11:52:09.545335  410030 main.go:141] libmachine: (bridge-230154) DBG | Using SSH client type: external
	I0127 11:52:09.545351  410030 main.go:141] libmachine: (bridge-230154) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa (-rw-------)
	I0127 11:52:09.545396  410030 main.go:141] libmachine: (bridge-230154) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:52:09.545409  410030 main.go:141] libmachine: (bridge-230154) DBG | About to run SSH command:
	I0127 11:52:09.545431  410030 main.go:141] libmachine: (bridge-230154) DBG | exit 0
	I0127 11:52:09.549092  410030 main.go:141] libmachine: (bridge-230154) DBG | SSH cmd err, output: exit status 255: 
	I0127 11:52:09.549118  410030 main.go:141] libmachine: (bridge-230154) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0127 11:52:09.549128  410030 main.go:141] libmachine: (bridge-230154) DBG | command : exit 0
	I0127 11:52:09.549141  410030 main.go:141] libmachine: (bridge-230154) DBG | err     : exit status 255
	I0127 11:52:09.549152  410030 main.go:141] libmachine: (bridge-230154) DBG | output  : 
	I0127 11:52:12.550382  410030 main.go:141] libmachine: (bridge-230154) DBG | Getting to WaitForSSH function...
	I0127 11:52:12.552791  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.553322  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:12.553351  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.553432  410030 main.go:141] libmachine: (bridge-230154) DBG | Using SSH client type: external
	I0127 11:52:12.553481  410030 main.go:141] libmachine: (bridge-230154) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa (-rw-------)
	I0127 11:52:12.553525  410030 main.go:141] libmachine: (bridge-230154) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:52:12.553539  410030 main.go:141] libmachine: (bridge-230154) DBG | About to run SSH command:
	I0127 11:52:12.553563  410030 main.go:141] libmachine: (bridge-230154) DBG | exit 0
	I0127 11:52:12.681782  410030 main.go:141] libmachine: (bridge-230154) DBG | SSH cmd err, output: <nil>: 
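The WaitForSSH steps above shell out to /usr/bin/ssh and run `exit 0` against the new VM until the command exits cleanly. A minimal sketch of that probe using the external ssh client; the host, user and key path mirror the log, while the attempt count and sleep are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs `exit 0` over ssh until it succeeds or attempts run out.
func waitForSSH(host, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // the guest is reachable over SSH
		}
		time.Sleep(3 * time.Second) // retry interval is an assumption
	}
	return fmt.Errorf("ssh to %s never came up", host)
}

func main() {
	fmt.Println(waitForSSH("192.168.61.114",
		"/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa", 20))
}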
	I0127 11:52:12.682047  410030 main.go:141] libmachine: (bridge-230154) KVM machine creation complete
	I0127 11:52:12.682445  410030 main.go:141] libmachine: (bridge-230154) Calling .GetConfigRaw
	I0127 11:52:12.682967  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:12.683184  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:12.683394  410030 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 11:52:12.683415  410030 main.go:141] libmachine: (bridge-230154) Calling .GetState
	I0127 11:52:12.684785  410030 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 11:52:12.684823  410030 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 11:52:12.684832  410030 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 11:52:12.684844  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:12.687551  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.687960  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:12.687997  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.688103  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:12.688306  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:12.688464  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:12.688609  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:12.688818  410030 main.go:141] libmachine: Using SSH client type: native
	I0127 11:52:12.689070  410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.114 22 <nil> <nil>}
	I0127 11:52:12.689084  410030 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 11:52:12.800827  410030 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:52:12.800849  410030 main.go:141] libmachine: Detecting the provisioner...
	I0127 11:52:12.800859  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:12.803312  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.803747  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:12.803778  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.803968  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:12.804181  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:12.804339  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:12.804499  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:12.804712  410030 main.go:141] libmachine: Using SSH client type: native
	I0127 11:52:12.804930  410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.114 22 <nil> <nil>}
	I0127 11:52:12.804944  410030 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 11:52:12.922388  410030 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 11:52:12.922499  410030 main.go:141] libmachine: found compatible host: buildroot
	I0127 11:52:12.922517  410030 main.go:141] libmachine: Provisioning with buildroot...
	I0127 11:52:12.922528  410030 main.go:141] libmachine: (bridge-230154) Calling .GetMachineName
	I0127 11:52:12.922767  410030 buildroot.go:166] provisioning hostname "bridge-230154"
	I0127 11:52:12.922793  410030 main.go:141] libmachine: (bridge-230154) Calling .GetMachineName
	I0127 11:52:12.922988  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:12.925557  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.925920  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:12.925951  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.926089  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:12.926266  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:12.926402  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:12.926527  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:12.926642  410030 main.go:141] libmachine: Using SSH client type: native
	I0127 11:52:12.926867  410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.114 22 <nil> <nil>}
	I0127 11:52:12.926884  410030 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-230154 && echo "bridge-230154" | sudo tee /etc/hostname
	I0127 11:52:13.055349  410030 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-230154
	
	I0127 11:52:13.055376  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.057804  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.058160  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.058184  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.058377  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:13.058583  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.058746  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.058898  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:13.059086  410030 main.go:141] libmachine: Using SSH client type: native
	I0127 11:52:13.059305  410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.114 22 <nil> <nil>}
	I0127 11:52:13.059340  410030 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-230154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-230154/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-230154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:52:13.182533  410030 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:52:13.182574  410030 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-348858/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-348858/.minikube}
	I0127 11:52:13.182607  410030 buildroot.go:174] setting up certificates
	I0127 11:52:13.182618  410030 provision.go:84] configureAuth start
	I0127 11:52:13.182631  410030 main.go:141] libmachine: (bridge-230154) Calling .GetMachineName
	I0127 11:52:13.182846  410030 main.go:141] libmachine: (bridge-230154) Calling .GetIP
	I0127 11:52:13.185388  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.185727  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.185753  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.185888  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.188052  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.188418  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.188451  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.188586  410030 provision.go:143] copyHostCerts
	I0127 11:52:13.188644  410030 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem, removing ...
	I0127 11:52:13.188668  410030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem
	I0127 11:52:13.188770  410030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem (1082 bytes)
	I0127 11:52:13.188901  410030 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem, removing ...
	I0127 11:52:13.188912  410030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem
	I0127 11:52:13.188951  410030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem (1123 bytes)
	I0127 11:52:13.189068  410030 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem, removing ...
	I0127 11:52:13.189080  410030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem
	I0127 11:52:13.189133  410030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem (1679 bytes)
	I0127 11:52:13.189206  410030 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem org=jenkins.bridge-230154 san=[127.0.0.1 192.168.61.114 bridge-230154 localhost minikube]
	I0127 11:52:13.437569  410030 provision.go:177] copyRemoteCerts
	I0127 11:52:13.437657  410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:52:13.437681  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.440100  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.440463  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.440498  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.440655  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:13.440869  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.441020  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:13.441174  410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
	I0127 11:52:13.527720  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:52:13.553220  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 11:52:13.577811  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 11:52:13.602562  410030 provision.go:87] duration metric: took 419.926949ms to configureAuth
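configureAuth above generates a server certificate whose subject alternative names cover both IP addresses and hostnames (san=[127.0.0.1 192.168.61.114 bridge-230154 localhost minikube]). A minimal sketch of producing such a certificate with Go's crypto/x509; unlike the real provisioner, which signs with the minikube CA key, this sketch is self-signed.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-230154"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs: IPs go in IPAddresses, names in DNSNames, as in the san=[...] list above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.114")},
		DNSNames:    []string{"bridge-230154", "localhost", "minikube"},
	}
	// Self-signed for brevity: template and parent are the same certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}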
	I0127 11:52:13.602597  410030 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:52:13.602829  410030 config.go:182] Loaded profile config "bridge-230154": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:52:13.602905  410030 main.go:141] libmachine: Checking connection to Docker...
	I0127 11:52:13.602923  410030 main.go:141] libmachine: (bridge-230154) Calling .GetURL
	I0127 11:52:13.604054  410030 main.go:141] libmachine: (bridge-230154) DBG | using libvirt version 6000000
	I0127 11:52:13.606405  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.606734  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.606760  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.606925  410030 main.go:141] libmachine: Docker is up and running!
	I0127 11:52:13.606940  410030 main.go:141] libmachine: Reticulating splines...
	I0127 11:52:13.606947  410030 client.go:171] duration metric: took 25.909938238s to LocalClient.Create
	I0127 11:52:13.606968  410030 start.go:167] duration metric: took 25.909999682s to libmachine.API.Create "bridge-230154"
	I0127 11:52:13.606981  410030 start.go:293] postStartSetup for "bridge-230154" (driver="kvm2")
	I0127 11:52:13.606995  410030 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:52:13.607018  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:13.607273  410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:52:13.607302  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.609569  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.609936  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.609966  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.610158  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:13.610355  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.610531  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:13.610640  410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
	I0127 11:52:13.697284  410030 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:52:13.702294  410030 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:52:13.702320  410030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-348858/.minikube/addons for local assets ...
	I0127 11:52:13.702383  410030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-348858/.minikube/files for local assets ...
	I0127 11:52:13.702495  410030 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem -> 3562042.pem in /etc/ssl/certs
	I0127 11:52:13.702595  410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:52:13.713272  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem --> /etc/ssl/certs/3562042.pem (1708 bytes)
	I0127 11:52:13.737044  410030 start.go:296] duration metric: took 130.0485ms for postStartSetup
	I0127 11:52:13.737087  410030 main.go:141] libmachine: (bridge-230154) Calling .GetConfigRaw
	I0127 11:52:13.737687  410030 main.go:141] libmachine: (bridge-230154) Calling .GetIP
	I0127 11:52:13.740135  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.740568  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.740596  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.740857  410030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/config.json ...
	I0127 11:52:13.741063  410030 start.go:128] duration metric: took 26.061575251s to createHost
	I0127 11:52:13.741091  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.743565  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.743863  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.743892  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.744009  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:13.744178  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.744308  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.744464  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:13.744612  410030 main.go:141] libmachine: Using SSH client type: native
	I0127 11:52:13.744775  410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.114 22 <nil> <nil>}
	I0127 11:52:13.744786  410030 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:52:13.858058  410030 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978733.835977728
	
	I0127 11:52:13.858081  410030 fix.go:216] guest clock: 1737978733.835977728
	I0127 11:52:13.858090  410030 fix.go:229] Guest: 2025-01-27 11:52:13.835977728 +0000 UTC Remote: 2025-01-27 11:52:13.74107788 +0000 UTC m=+26.172194908 (delta=94.899848ms)
	I0127 11:52:13.858112  410030 fix.go:200] guest clock delta is within tolerance: 94.899848ms
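The fix.go lines above read the guest clock with `date +%s.%N` and compare it to the host clock, accepting a delta of roughly 95ms. A minimal sketch of that comparison; the one-second tolerance used below is an assumption, since the log does not show minikube's actual threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns guest minus host.
func clockDelta(guestEpoch string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	delta, err := clockDelta("1737978733.835977728", time.Now())
	if err != nil {
		panic(err)
	}
	if math.Abs(delta.Seconds()) < 1.0 { // tolerance value is an assumption
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is too large; the clock should be resynced\n", delta)
	}
}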
	I0127 11:52:13.858119  410030 start.go:83] releasing machines lock for "bridge-230154", held for 26.178731868s
	I0127 11:52:13.858143  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:13.858357  410030 main.go:141] libmachine: (bridge-230154) Calling .GetIP
	I0127 11:52:13.860564  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.860972  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.861005  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.861149  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:13.861700  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:13.861894  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:13.861978  410030 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:52:13.862037  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.862113  410030 ssh_runner.go:195] Run: cat /version.json
	I0127 11:52:13.862141  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.864536  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.864853  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.864880  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.864898  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.865008  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:13.865191  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.865337  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:13.865370  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.865394  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.865518  410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
	I0127 11:52:13.865598  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:13.865728  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.865888  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:13.866057  410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
	I0127 11:52:13.965402  410030 ssh_runner.go:195] Run: systemctl --version
	I0127 11:52:13.971806  410030 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:52:13.977779  410030 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:52:13.977840  410030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:52:13.994427  410030 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:52:13.994450  410030 start.go:495] detecting cgroup driver to use...
	I0127 11:52:13.994511  410030 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 11:52:14.024064  410030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:52:14.037402  410030 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:52:14.037442  410030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:52:14.051360  410030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:52:14.064833  410030 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:52:14.189820  410030 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:52:14.353457  410030 docker.go:233] disabling docker service ...
	I0127 11:52:14.353523  410030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:52:14.368733  410030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:52:14.383491  410030 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:52:14.519252  410030 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:52:14.653505  410030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:52:14.667113  410030 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:52:14.686409  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 11:52:14.698227  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 11:52:14.708812  410030 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 11:52:14.708860  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 11:52:14.719554  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:52:14.729838  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 11:52:14.740183  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:52:14.750883  410030 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:52:14.761217  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 11:52:14.771423  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 11:52:14.781773  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 11:52:14.793278  410030 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:52:14.804439  410030 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:52:14.804483  410030 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:52:14.818950  410030 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:52:14.829832  410030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:52:14.959488  410030 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 11:52:14.989337  410030 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 11:52:14.989418  410030 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 11:52:14.994828  410030 retry.go:31] will retry after 1.345888224s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 11:52:16.341324  410030 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
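Above, start waits up to 60s for /run/containerd/containerd.sock to appear by re-running stat after a short retry. An alternative sketch that dials the unix socket instead of stat-ing it, which also confirms containerd is accepting connections; the poll interval is an assumption.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForContainerd polls the containerd socket until it can be dialed.
func waitForContainerd(socket string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", socket, time.Second)
		if err == nil {
			conn.Close()
			return nil // containerd is up and accepting connections
		}
		time.Sleep(time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("%s did not become dialable within %s", socket, timeout)
}

func main() {
	fmt.Println(waitForContainerd("/run/containerd/containerd.sock", 60*time.Second))
}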
	I0127 11:52:16.347230  410030 start.go:563] Will wait 60s for crictl version
	I0127 11:52:16.347291  410030 ssh_runner.go:195] Run: which crictl
	I0127 11:52:16.351193  410030 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:52:16.395528  410030 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 11:52:16.395651  410030 ssh_runner.go:195] Run: containerd --version
	I0127 11:52:16.423238  410030 ssh_runner.go:195] Run: containerd --version
	I0127 11:52:16.449514  410030 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 11:52:16.450520  410030 main.go:141] libmachine: (bridge-230154) Calling .GetIP
	I0127 11:52:16.453118  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:16.453477  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:16.453507  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:16.453734  410030 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 11:52:16.458237  410030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:52:16.472482  410030 kubeadm.go:883] updating cluster {Name:bridge-230154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.114 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:52:16.472594  410030 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:52:16.472646  410030 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:52:16.504936  410030 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 11:52:16.504987  410030 ssh_runner.go:195] Run: which lz4
	I0127 11:52:16.509417  410030 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:52:16.514081  410030 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:52:16.514116  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398131433 bytes)
	I0127 11:52:18.011626  410030 containerd.go:563] duration metric: took 1.502237089s to copy over tarball
	I0127 11:52:18.011722  410030 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:52:20.285505  410030 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273743353s)
	I0127 11:52:20.285572  410030 containerd.go:570] duration metric: took 2.273906638s to extract the tarball
	I0127 11:52:20.285607  410030 ssh_runner.go:146] rm: /preloaded.tar.lz4
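The preload step above stats /preloaded.tar.lz4, copies it over when absent, unpacks it into /var with tar's lz4 filter, and then removes it. A minimal local sketch of the extraction command as logged, run via exec rather than over SSH and without the sudo the guest requires.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball into dest, mirroring the
// logged command: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}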
	I0127 11:52:20.324554  410030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:52:20.445111  410030 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 11:52:20.473323  410030 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:52:20.503997  410030 retry.go:31] will retry after 167.428638ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T11:52:20Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0127 11:52:20.672333  410030 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:52:20.709952  410030 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 11:52:20.709981  410030 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:52:20.709993  410030 kubeadm.go:934] updating node { 192.168.61.114 8443 v1.32.1 containerd true true} ...
	I0127 11:52:20.710125  410030 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-230154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0127 11:52:20.710197  410030 ssh_runner.go:195] Run: sudo crictl info
	I0127 11:52:20.744967  410030 cni.go:84] Creating CNI manager for "bridge"
	I0127 11:52:20.744998  410030 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:52:20.745028  410030 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.114 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-230154 NodeName:bridge-230154 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:52:20.745188  410030 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "bridge-230154"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.114"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.114"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:52:20.745251  410030 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:52:20.756008  410030 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:52:20.756057  410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:52:20.765655  410030 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0127 11:52:20.782155  410030 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:52:20.798911  410030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2309 bytes)
	I0127 11:52:20.816745  410030 ssh_runner.go:195] Run: grep 192.168.61.114	control-plane.minikube.internal$ /etc/hosts
	I0127 11:52:20.820748  410030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:52:20.833862  410030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:52:20.953656  410030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:52:20.974846  410030 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154 for IP: 192.168.61.114
	I0127 11:52:20.974871  410030 certs.go:194] generating shared ca certs ...
	I0127 11:52:20.974892  410030 certs.go:226] acquiring lock for ca certs: {Name:mkd458666dacb6826c0d92f860c3c2133957f34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:20.975122  410030 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-348858/.minikube/ca.key
	I0127 11:52:20.975196  410030 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.key
	I0127 11:52:20.975212  410030 certs.go:256] generating profile certs ...
	I0127 11:52:20.975305  410030 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.key
	I0127 11:52:20.975335  410030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt with IP's: []
	I0127 11:52:21.301307  410030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt ...
	I0127 11:52:21.301335  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt: {Name:mk56bf4c2bbecfad8654b1b4ec642ad6fec51061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:21.301487  410030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.key ...
	I0127 11:52:21.301498  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.key: {Name:mk552257e0fe7fe2855b6465ed9cf6fdbde292fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:21.301600  410030 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key.efd0145a
	I0127 11:52:21.301615  410030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt.efd0145a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.114]
	I0127 11:52:21.347405  410030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt.efd0145a ...
	I0127 11:52:21.347434  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt.efd0145a: {Name:mk6a6599e29481626e185ed34dee333ec39afdfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:21.347596  410030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key.efd0145a ...
	I0127 11:52:21.347613  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key.efd0145a: {Name:mk7efccd9616f59b687d73eb0de97063b6b07fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:21.347712  410030 certs.go:381] copying /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt.efd0145a -> /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt
	I0127 11:52:21.347813  410030 certs.go:385] copying /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key.efd0145a -> /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key
	I0127 11:52:21.347892  410030 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.key
	I0127 11:52:21.347914  410030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.crt with IP's: []
	I0127 11:52:21.603596  410030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.crt ...
	I0127 11:52:21.603626  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.crt: {Name:mk62ae8cb0440216cba0e9b53bb75a82eea68d94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:21.603813  410030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.key ...
	I0127 11:52:21.603851  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.key: {Name:mk874150a052e7bf16d1760bcb83588a7d7232ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:21.604047  410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204.pem (1338 bytes)
	W0127 11:52:21.604084  410030 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204_empty.pem, impossibly tiny 0 bytes
	I0127 11:52:21.604094  410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:52:21.604127  410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem (1082 bytes)
	I0127 11:52:21.604150  410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:52:21.604173  410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem (1679 bytes)
	I0127 11:52:21.604208  410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem (1708 bytes)
	I0127 11:52:21.604922  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:52:21.640478  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:52:21.675198  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:52:21.707991  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 11:52:21.734067  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 11:52:21.758859  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:52:21.785069  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:52:21.811694  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:52:21.839559  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:52:21.864922  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204.pem --> /usr/share/ca-certificates/356204.pem (1338 bytes)
	I0127 11:52:21.893151  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem --> /usr/share/ca-certificates/3562042.pem (1708 bytes)
	I0127 11:52:21.918761  410030 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:52:21.936954  410030 ssh_runner.go:195] Run: openssl version
	I0127 11:52:21.943412  410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356204.pem && ln -fs /usr/share/ca-certificates/356204.pem /etc/ssl/certs/356204.pem"
	I0127 11:52:21.953934  410030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356204.pem
	I0127 11:52:21.958381  410030 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/356204.pem
	I0127 11:52:21.958435  410030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356204.pem
	I0127 11:52:21.964735  410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356204.pem /etc/ssl/certs/51391683.0"
	I0127 11:52:21.976503  410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3562042.pem && ln -fs /usr/share/ca-certificates/3562042.pem /etc/ssl/certs/3562042.pem"
	I0127 11:52:21.987257  410030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3562042.pem
	I0127 11:52:21.993575  410030 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/3562042.pem
	I0127 11:52:21.993646  410030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3562042.pem
	I0127 11:52:21.999525  410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3562042.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:52:22.009959  410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:52:22.021429  410030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:52:22.026427  410030 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:52:22.026475  410030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:52:22.032448  410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:52:22.043143  410030 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:52:22.047488  410030 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:52:22.047543  410030 kubeadm.go:392] StartCluster: {Name:bridge-230154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.114 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:52:22.047613  410030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 11:52:22.047658  410030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:52:22.086372  410030 cri.go:89] found id: ""
	I0127 11:52:22.086433  410030 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:52:22.096728  410030 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:52:22.106517  410030 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:52:22.116214  410030 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:52:22.116231  410030 kubeadm.go:157] found existing configuration files:
	
	I0127 11:52:22.116264  410030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:52:22.125344  410030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:52:22.125413  410030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:52:22.134811  410030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:52:22.143836  410030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:52:22.143877  410030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:52:22.153251  410030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:52:22.161993  410030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:52:22.162078  410030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:52:22.171015  410030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:52:22.179758  410030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:52:22.179812  410030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:52:22.189014  410030 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:52:22.375345  410030 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:52:32.209450  410030 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:52:32.209522  410030 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:52:32.209617  410030 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:52:32.209722  410030 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:52:32.209830  410030 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:52:32.209885  410030 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:52:32.211330  410030 out.go:235]   - Generating certificates and keys ...
	I0127 11:52:32.211448  410030 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:52:32.211535  410030 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:52:32.211635  410030 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:52:32.211700  410030 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:52:32.211752  410030 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:52:32.211795  410030 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:52:32.211845  410030 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:52:32.211948  410030 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-230154 localhost] and IPs [192.168.61.114 127.0.0.1 ::1]
	I0127 11:52:32.211995  410030 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:52:32.212189  410030 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-230154 localhost] and IPs [192.168.61.114 127.0.0.1 ::1]
	I0127 11:52:32.212294  410030 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:52:32.212377  410030 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:52:32.212435  410030 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:52:32.212524  410030 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:52:32.212592  410030 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:52:32.212643  410030 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:52:32.212692  410030 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:52:32.212798  410030 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:52:32.212898  410030 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:52:32.212993  410030 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:52:32.213052  410030 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:52:32.214270  410030 out.go:235]   - Booting up control plane ...
	I0127 11:52:32.214386  410030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:52:32.214498  410030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:52:32.214590  410030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:52:32.214739  410030 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:52:32.214899  410030 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:52:32.214967  410030 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:52:32.215138  410030 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:52:32.215293  410030 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:52:32.215402  410030 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001079301s
	I0127 11:52:32.215488  410030 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:52:32.215548  410030 kubeadm.go:310] [api-check] The API server is healthy after 4.502067696s
	I0127 11:52:32.215682  410030 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:52:32.215799  410030 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:52:32.215885  410030 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:52:32.216101  410030 kubeadm.go:310] [mark-control-plane] Marking the node bridge-230154 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:52:32.216183  410030 kubeadm.go:310] [bootstrap-token] Using token: 3ugidl.t0qx3cfrqpz3s5rm
	I0127 11:52:32.218040  410030 out.go:235]   - Configuring RBAC rules ...
	I0127 11:52:32.218199  410030 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:52:32.218297  410030 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:52:32.218438  410030 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:52:32.218656  410030 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:52:32.218778  410030 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:52:32.218872  410030 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:52:32.219002  410030 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:52:32.219065  410030 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:52:32.219138  410030 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:52:32.219147  410030 kubeadm.go:310] 
	I0127 11:52:32.219229  410030 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:52:32.219238  410030 kubeadm.go:310] 
	I0127 11:52:32.219362  410030 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:52:32.219371  410030 kubeadm.go:310] 
	I0127 11:52:32.219407  410030 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:52:32.219511  410030 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:52:32.219596  410030 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:52:32.219609  410030 kubeadm.go:310] 
	I0127 11:52:32.219697  410030 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:52:32.219711  410030 kubeadm.go:310] 
	I0127 11:52:32.219782  410030 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:52:32.219793  410030 kubeadm.go:310] 
	I0127 11:52:32.219869  410030 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:52:32.219979  410030 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:52:32.220072  410030 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:52:32.220081  410030 kubeadm.go:310] 
	I0127 11:52:32.220215  410030 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:52:32.220347  410030 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:52:32.220359  410030 kubeadm.go:310] 
	I0127 11:52:32.220497  410030 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3ugidl.t0qx3cfrqpz3s5rm \
	I0127 11:52:32.220638  410030 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c769a71fa2072963699012a67c9bb4b27b6fc88b52aea51191b7b2189ca81982 \
	I0127 11:52:32.220670  410030 kubeadm.go:310] 	--control-plane 
	I0127 11:52:32.220679  410030 kubeadm.go:310] 
	I0127 11:52:32.220787  410030 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:52:32.220796  410030 kubeadm.go:310] 
	I0127 11:52:32.220902  410030 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3ugidl.t0qx3cfrqpz3s5rm \
	I0127 11:52:32.221064  410030 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c769a71fa2072963699012a67c9bb4b27b6fc88b52aea51191b7b2189ca81982 
	I0127 11:52:32.221079  410030 cni.go:84] Creating CNI manager for "bridge"
	I0127 11:52:32.222330  410030 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:52:32.223261  410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:52:32.235254  410030 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 11:52:32.261938  410030 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:52:32.262064  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:32.262145  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-230154 minikube.k8s.io/updated_at=2025_01_27T11_52_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=bridge-230154 minikube.k8s.io/primary=true
	I0127 11:52:32.280765  410030 ops.go:34] apiserver oom_adj: -16
	I0127 11:52:32.416195  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:32.916850  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:33.416903  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:33.916419  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:34.417254  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:34.916570  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:35.416622  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:35.916814  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:36.417150  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:36.512250  410030 kubeadm.go:1113] duration metric: took 4.250259054s to wait for elevateKubeSystemPrivileges
	I0127 11:52:36.512301  410030 kubeadm.go:394] duration metric: took 14.46476068s to StartCluster
	I0127 11:52:36.512331  410030 settings.go:142] acquiring lock: {Name:mkb277d193c8888d23a77778c65f322a69e59091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:36.512467  410030 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 11:52:36.516653  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/kubeconfig: {Name:mk12891275228a2835a35659c2ede45028f0a576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:36.516976  410030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 11:52:36.516972  410030 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.114 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 11:52:36.517077  410030 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:52:36.517203  410030 addons.go:69] Setting storage-provisioner=true in profile "bridge-230154"
	I0127 11:52:36.517227  410030 addons.go:238] Setting addon storage-provisioner=true in "bridge-230154"
	I0127 11:52:36.517240  410030 config.go:182] Loaded profile config "bridge-230154": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:52:36.517270  410030 host.go:66] Checking if "bridge-230154" exists ...
	I0127 11:52:36.517307  410030 addons.go:69] Setting default-storageclass=true in profile "bridge-230154"
	I0127 11:52:36.517328  410030 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-230154"
	I0127 11:52:36.517801  410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:52:36.517819  410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:52:36.517855  410030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:52:36.517860  410030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:52:36.519326  410030 out.go:177] * Verifying Kubernetes components...
	I0127 11:52:36.520466  410030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:52:36.537759  410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32825
	I0127 11:52:36.538308  410030 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:52:36.538532  410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0127 11:52:36.538955  410030 main.go:141] libmachine: Using API Version  1
	I0127 11:52:36.538984  410030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:52:36.539060  410030 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:52:36.539411  410030 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:52:36.539558  410030 main.go:141] libmachine: Using API Version  1
	I0127 11:52:36.539581  410030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:52:36.539945  410030 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:52:36.539986  410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:52:36.540037  410030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:52:36.540303  410030 main.go:141] libmachine: (bridge-230154) Calling .GetState
	I0127 11:52:36.543982  410030 addons.go:238] Setting addon default-storageclass=true in "bridge-230154"
	I0127 11:52:36.544027  410030 host.go:66] Checking if "bridge-230154" exists ...
	I0127 11:52:36.544408  410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:52:36.544452  410030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:52:36.557799  410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0127 11:52:36.558329  410030 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:52:36.558879  410030 main.go:141] libmachine: Using API Version  1
	I0127 11:52:36.558897  410030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:52:36.559224  410030 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:52:36.559412  410030 main.go:141] libmachine: (bridge-230154) Calling .GetState
	I0127 11:52:36.559996  410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32917
	I0127 11:52:36.560556  410030 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:52:36.561039  410030 main.go:141] libmachine: Using API Version  1
	I0127 11:52:36.561051  410030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:52:36.561110  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:36.561469  410030 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:52:36.561948  410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:52:36.561991  410030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:52:36.562672  410030 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:52:36.563764  410030 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:52:36.563778  410030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:52:36.563793  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:36.567499  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:36.568057  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:36.568077  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:36.568247  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:36.568401  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:36.568577  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:36.568732  410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
	I0127 11:52:36.577540  410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0127 11:52:36.578011  410030 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:52:36.578548  410030 main.go:141] libmachine: Using API Version  1
	I0127 11:52:36.578571  410030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:52:36.578891  410030 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:52:36.579083  410030 main.go:141] libmachine: (bridge-230154) Calling .GetState
	I0127 11:52:36.580470  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:36.580638  410030 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:52:36.580655  410030 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:52:36.580682  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:36.583026  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:36.583362  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:36.583391  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:36.583573  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:36.583748  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:36.583875  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:36.584004  410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
	I0127 11:52:36.919631  410030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:52:36.921628  410030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:52:36.921644  410030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 11:52:36.988242  410030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:52:38.185164  410030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.265497157s)
	I0127 11:52:38.185231  410030 main.go:141] libmachine: Making call to close driver server
	I0127 11:52:38.185230  410030 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.263561786s)
	I0127 11:52:38.185246  410030 main.go:141] libmachine: (bridge-230154) Calling .Close
	I0127 11:52:38.185289  410030 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.263612805s)
	I0127 11:52:38.185330  410030 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0127 11:52:38.185372  410030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.197100952s)
	I0127 11:52:38.185399  410030 main.go:141] libmachine: Making call to close driver server
	I0127 11:52:38.185427  410030 main.go:141] libmachine: (bridge-230154) Calling .Close
	I0127 11:52:38.185562  410030 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:52:38.185597  410030 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:52:38.185609  410030 main.go:141] libmachine: Making call to close driver server
	I0127 11:52:38.185616  410030 main.go:141] libmachine: (bridge-230154) Calling .Close
	I0127 11:52:38.185828  410030 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:52:38.185852  410030 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:52:38.185862  410030 main.go:141] libmachine: Making call to close driver server
	I0127 11:52:38.185868  410030 main.go:141] libmachine: (bridge-230154) Calling .Close
	I0127 11:52:38.186004  410030 main.go:141] libmachine: (bridge-230154) DBG | Closing plugin on server side
	I0127 11:52:38.186048  410030 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:52:38.186069  410030 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:52:38.186075  410030 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:52:38.186079  410030 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:52:38.187011  410030 node_ready.go:35] waiting up to 15m0s for node "bridge-230154" to be "Ready" ...
	I0127 11:52:38.212873  410030 node_ready.go:49] node "bridge-230154" has status "Ready":"True"
	I0127 11:52:38.212905  410030 node_ready.go:38] duration metric: took 25.865633ms for node "bridge-230154" to be "Ready" ...
	I0127 11:52:38.212917  410030 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:52:38.214274  410030 main.go:141] libmachine: Making call to close driver server
	I0127 11:52:38.214298  410030 main.go:141] libmachine: (bridge-230154) Calling .Close
	I0127 11:52:38.214581  410030 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:52:38.214630  410030 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:52:38.214612  410030 main.go:141] libmachine: (bridge-230154) DBG | Closing plugin on server side
	I0127 11:52:38.216008  410030 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 11:52:38.216924  410030 addons.go:514] duration metric: took 1.699865075s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 11:52:38.224349  410030 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:38.695217  410030 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-230154" context rescaled to 1 replicas
	I0127 11:52:40.231472  410030 pod_ready.go:103] pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace has status "Ready":"False"
	I0127 11:52:42.732355  410030 pod_ready.go:103] pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace has status "Ready":"False"
	I0127 11:52:44.230143  410030 pod_ready.go:98] pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.114 HostIPs:[{IP:192.168.61.114}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 11:52:36 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 11:52:37 +0000 UTC,FinishedAt:2025-01-27 11:52:43 +0000 UTC,ContainerID:containerd://c98c745d4f9edf1ff917ee47655ca1208c7e4b09a4743c10c5415ed7b2fec8bd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:containerd://c98c745d4f9edf1ff917ee47655ca1208c7e4b09a4743c10c5415ed7b2fec8bd Started:0xc001b44fd0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001c1e130} {Name:kube-api-access-flxzd MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001c1e140}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 11:52:44.230174  410030 pod_ready.go:82] duration metric: took 6.00579922s for pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace to be "Ready" ...
	E0127 11:52:44.230189  410030 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.114 HostIPs:[{IP:192.168.61.114}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 11:52:36 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 11:52:37 +0000 UTC,FinishedAt:2025-01-27 11:52:43 +0000 UTC,ContainerID:containerd://c98c745d4f9edf1ff917ee47655ca1208c7e4b09a4743c10c5415ed7b2fec8bd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:containerd://c98c745d4f9edf1ff917ee47655ca1208c7e4b09a4743c10c5415ed7b2fec8bd Started:0xc001b44fd0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001c1e130} {Name:kube-api-access-flxzd MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001c1e140}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 11:52:44.230202  410030 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-pc8xl" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:44.234796  410030 pod_ready.go:93] pod "coredns-668d6bf9bc-pc8xl" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:44.234815  410030 pod_ready.go:82] duration metric: took 4.604397ms for pod "coredns-668d6bf9bc-pc8xl" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:44.234823  410030 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:44.238759  410030 pod_ready.go:93] pod "etcd-bridge-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:44.238775  410030 pod_ready.go:82] duration metric: took 3.947094ms for pod "etcd-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:44.238782  410030 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.244732  410030 pod_ready.go:93] pod "kube-apiserver-bridge-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:45.244763  410030 pod_ready.go:82] duration metric: took 1.00597309s for pod "kube-apiserver-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.244778  410030 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.249321  410030 pod_ready.go:93] pod "kube-controller-manager-bridge-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:45.249342  410030 pod_ready.go:82] duration metric: took 4.554992ms for pod "kube-controller-manager-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.249355  410030 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-5xb8t" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.428257  410030 pod_ready.go:93] pod "kube-proxy-5xb8t" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:45.428277  410030 pod_ready.go:82] duration metric: took 178.914707ms for pod "kube-proxy-5xb8t" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.428285  410030 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.829776  410030 pod_ready.go:93] pod "kube-scheduler-bridge-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:45.829809  410030 pod_ready.go:82] duration metric: took 401.516042ms for pod "kube-scheduler-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.829824  410030 pod_ready.go:39] duration metric: took 7.616894592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:52:45.829844  410030 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:52:45.829909  410030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:52:45.845203  410030 api_server.go:72] duration metric: took 9.328191567s to wait for apiserver process to appear ...
	I0127 11:52:45.845230  410030 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:52:45.845249  410030 api_server.go:253] Checking apiserver healthz at https://192.168.61.114:8443/healthz ...
	I0127 11:52:45.849548  410030 api_server.go:279] https://192.168.61.114:8443/healthz returned 200:
	ok
	I0127 11:52:45.850315  410030 api_server.go:141] control plane version: v1.32.1
	I0127 11:52:45.850339  410030 api_server.go:131] duration metric: took 5.10115ms to wait for apiserver health ...
	I0127 11:52:45.850346  410030 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:52:46.030070  410030 system_pods.go:59] 7 kube-system pods found
	I0127 11:52:46.030111  410030 system_pods.go:61] "coredns-668d6bf9bc-pc8xl" [45ae809c-52a6-4405-8382-d79f3d6b3e58] Running
	I0127 11:52:46.030120  410030 system_pods.go:61] "etcd-bridge-230154" [64ffad49-a1cc-4273-a76f-27829ec98715] Running
	I0127 11:52:46.030127  410030 system_pods.go:61] "kube-apiserver-bridge-230154" [ef8b8909-ce47-4280-a8ad-1c3dcd14e862] Running
	I0127 11:52:46.030142  410030 system_pods.go:61] "kube-controller-manager-bridge-230154" [c8aff057-390a-474d-9436-1bdcc79bd8de] Running
	I0127 11:52:46.030149  410030 system_pods.go:61] "kube-proxy-5xb8t" [bf62bfa5-b098-442e-b13c-2a041c874c50] Running
	I0127 11:52:46.030159  410030 system_pods.go:61] "kube-scheduler-bridge-230154" [da73974e-b55e-400b-a078-0903bc8b7285] Running
	I0127 11:52:46.030169  410030 system_pods.go:61] "storage-provisioner" [58b2ed51-7586-457e-a455-1a52afbcc2fd] Running
	I0127 11:52:46.030181  410030 system_pods.go:74] duration metric: took 179.827627ms to wait for pod list to return data ...
	I0127 11:52:46.030196  410030 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:52:46.228329  410030 default_sa.go:45] found service account: "default"
	I0127 11:52:46.228364  410030 default_sa.go:55] duration metric: took 198.158482ms for default service account to be created ...
	I0127 11:52:46.228375  410030 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:52:46.430997  410030 system_pods.go:87] 7 kube-system pods found
	I0127 11:52:46.630596  410030 system_pods.go:105] "coredns-668d6bf9bc-pc8xl" [45ae809c-52a6-4405-8382-d79f3d6b3e58] Running
	I0127 11:52:46.630617  410030 system_pods.go:105] "etcd-bridge-230154" [64ffad49-a1cc-4273-a76f-27829ec98715] Running
	I0127 11:52:46.630623  410030 system_pods.go:105] "kube-apiserver-bridge-230154" [ef8b8909-ce47-4280-a8ad-1c3dcd14e862] Running
	I0127 11:52:46.630628  410030 system_pods.go:105] "kube-controller-manager-bridge-230154" [c8aff057-390a-474d-9436-1bdcc79bd8de] Running
	I0127 11:52:46.630632  410030 system_pods.go:105] "kube-proxy-5xb8t" [bf62bfa5-b098-442e-b13c-2a041c874c50] Running
	I0127 11:52:46.630636  410030 system_pods.go:105] "kube-scheduler-bridge-230154" [da73974e-b55e-400b-a078-0903bc8b7285] Running
	I0127 11:52:46.630640  410030 system_pods.go:105] "storage-provisioner" [58b2ed51-7586-457e-a455-1a52afbcc2fd] Running
	I0127 11:52:46.630649  410030 system_pods.go:147] duration metric: took 402.266545ms to wait for k8s-apps to be running ...
	I0127 11:52:46.630655  410030 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 11:52:46.630700  410030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:52:46.647032  410030 system_svc.go:56] duration metric: took 16.365202ms WaitForService to wait for kubelet
	I0127 11:52:46.647063  410030 kubeadm.go:582] duration metric: took 10.130054313s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:52:46.647088  410030 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:52:46.828212  410030 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:52:46.828240  410030 node_conditions.go:123] node cpu capacity is 2
	I0127 11:52:46.828255  410030 node_conditions.go:105] duration metric: took 181.16132ms to run NodePressure ...
	I0127 11:52:46.828269  410030 start.go:241] waiting for startup goroutines ...
	I0127 11:52:46.828280  410030 start.go:246] waiting for cluster config update ...
	I0127 11:52:46.828295  410030 start.go:255] writing updated cluster config ...
	I0127 11:52:46.828597  410030 ssh_runner.go:195] Run: rm -f paused
	I0127 11:52:46.879719  410030 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 11:52:46.881278  410030 out.go:177] * Done! kubectl is now configured to use "bridge-230154" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	4a33d428f4f77       523cad1a4df73       4 seconds ago       Exited              dashboard-metrics-scraper   9                   45045ebd5057a       dashboard-metrics-scraper-86c6bf9756-k2swq
	b9368a2abd7ba       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   4d8859601631b       kubernetes-dashboard-7779f9b69b-2c6kt
	c005698c25d45       6e38f40d628db       21 minutes ago      Running             storage-provisioner         0                   3e1c9c73a2968       storage-provisioner
	4eef6bae239b9       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   c93ac2efa24fb       coredns-668d6bf9bc-5cktj
	6db8e2b9dbed6       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   7b054360c4744       coredns-668d6bf9bc-kjqjk
	f936328e91f32       e29f9c7391fd9       21 minutes ago      Running             kube-proxy                  0                   01916883da50b       kube-proxy-44m77
	bb73d9fe3729d       a9e7e6b294baf       21 minutes ago      Running             etcd                        2                   89c7c8b36c50d       etcd-no-preload-976043
	765852d6ddf17       2b0d6572d062c       21 minutes ago      Running             kube-scheduler              2                   db4bdfdbdadbe       kube-scheduler-no-preload-976043
	aaea52032a210       95c0bda56fc4d       21 minutes ago      Running             kube-apiserver              2                   809e61c50c175       kube-apiserver-no-preload-976043
	4fafe9b41d24a       019ee182b58e2       21 minutes ago      Running             kube-controller-manager     2                   3b1b86b7b9e65       kube-controller-manager-no-preload-976043
	
	
	==> containerd <==
	Jan 27 12:03:08 no-preload-976043 containerd[559]: time="2025-01-27T12:03:08.861997046Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 12:03:08 no-preload-976043 containerd[559]: time="2025-01-27T12:03:08.864171324Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 12:03:08 no-preload-976043 containerd[559]: time="2025-01-27T12:03:08.864261186Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 12:03:26 no-preload-976043 containerd[559]: time="2025-01-27T12:03:26.854027820Z" level=info msg="CreateContainer within sandbox \"45045ebd5057a80801127322387e6020ed1b9d72cd06260400445ee1c56bfb57\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 27 12:03:26 no-preload-976043 containerd[559]: time="2025-01-27T12:03:26.892407230Z" level=info msg="CreateContainer within sandbox \"45045ebd5057a80801127322387e6020ed1b9d72cd06260400445ee1c56bfb57\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb\""
	Jan 27 12:03:26 no-preload-976043 containerd[559]: time="2025-01-27T12:03:26.893654754Z" level=info msg="StartContainer for \"bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb\""
	Jan 27 12:03:26 no-preload-976043 containerd[559]: time="2025-01-27T12:03:26.975184957Z" level=info msg="StartContainer for \"bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb\" returns successfully"
	Jan 27 12:03:27 no-preload-976043 containerd[559]: time="2025-01-27T12:03:27.022213505Z" level=info msg="shim disconnected" id=bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb namespace=k8s.io
	Jan 27 12:03:27 no-preload-976043 containerd[559]: time="2025-01-27T12:03:27.022361690Z" level=warning msg="cleaning up after shim disconnected" id=bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb namespace=k8s.io
	Jan 27 12:03:27 no-preload-976043 containerd[559]: time="2025-01-27T12:03:27.022450948Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 12:03:27 no-preload-976043 containerd[559]: time="2025-01-27T12:03:27.555124668Z" level=info msg="RemoveContainer for \"1f0e08ad11074a1d1459ebc1363490d304fb38d6d2ff3731ae14d271c8eb0fa7\""
	Jan 27 12:03:27 no-preload-976043 containerd[559]: time="2025-01-27T12:03:27.564200708Z" level=info msg="RemoveContainer for \"1f0e08ad11074a1d1459ebc1363490d304fb38d6d2ff3731ae14d271c8eb0fa7\" returns successfully"
	Jan 27 12:08:10 no-preload-976043 containerd[559]: time="2025-01-27T12:08:10.855343700Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:08:10 no-preload-976043 containerd[559]: time="2025-01-27T12:08:10.871026681Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 12:08:10 no-preload-976043 containerd[559]: time="2025-01-27T12:08:10.873079989Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 12:08:10 no-preload-976043 containerd[559]: time="2025-01-27T12:08:10.873168589Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.853808420Z" level=info msg="CreateContainer within sandbox \"45045ebd5057a80801127322387e6020ed1b9d72cd06260400445ee1c56bfb57\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.875535910Z" level=info msg="CreateContainer within sandbox \"45045ebd5057a80801127322387e6020ed1b9d72cd06260400445ee1c56bfb57\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30\""
	Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.876598002Z" level=info msg="StartContainer for \"4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30\""
	Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.949858479Z" level=info msg="StartContainer for \"4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30\" returns successfully"
	Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.995611575Z" level=info msg="shim disconnected" id=4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30 namespace=k8s.io
	Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.995768308Z" level=warning msg="cleaning up after shim disconnected" id=4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30 namespace=k8s.io
	Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.995872790Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 12:08:29 no-preload-976043 containerd[559]: time="2025-01-27T12:08:29.246869050Z" level=info msg="RemoveContainer for \"bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb\""
	Jan 27 12:08:29 no-preload-976043 containerd[559]: time="2025-01-27T12:08:29.252119353Z" level=info msg="RemoveContainer for \"bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb\" returns successfully"
	
	
	==> coredns [4eef6bae239b90f4992d4b21636d91a4816334e40d073853f0c610ca8e6ff0ba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [6db8e2b9dbed6e543ea5749ee7b922719309f1e0d1601d1c22528d4d9567869f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-976043
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-976043
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa
	                    minikube.k8s.io/name=no-preload-976043
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T11_47_02_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:46:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-976043
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:08:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:04:55 +0000   Mon, 27 Jan 2025 11:46:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:04:55 +0000   Mon, 27 Jan 2025 11:46:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:04:55 +0000   Mon, 27 Jan 2025 11:46:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:04:55 +0000   Mon, 27 Jan 2025 11:46:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.171
	  Hostname:    no-preload-976043
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aada06ce10ef4ecdbcc624ca12030b51
	  System UUID:                aada06ce-10ef-4ecd-bcc6-24ca12030b51
	  Boot ID:                    26eb3504-eb5c-421e-85ec-5c0bf85b4166
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-5cktj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-kjqjk                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-976043                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-976043              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-976043     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-44m77                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-976043              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-cxprr                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-k2swq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-2c6kt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node no-preload-976043 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node no-preload-976043 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node no-preload-976043 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node no-preload-976043 event: Registered Node no-preload-976043 in Controller
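The node summary above can be regenerated directly, and the "Allocated resources" figures follow from the pod table; a minimal sketch, assuming the kubectl context carries the profile name:

    kubectl --context no-preload-976043 describe node no-preload-976043
    # CPU requests:    100m+100m+100m+250m+200m+100m+100m = 950m of 2 CPUs (2000m) ≈ 47%
    # Memory requests: 70Mi+70Mi+100Mi+200Mi = 440Mi of 2164184Ki allocatable ≈ 20%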
	
	
	==> dmesg <==
	[  +0.041941] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.298744] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.922659] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.611846] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.702018] systemd-fstab-generator[483]: Ignoring "noauto" option for root device
	[  +0.064658] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072603] systemd-fstab-generator[495]: Ignoring "noauto" option for root device
	[  +0.167173] systemd-fstab-generator[509]: Ignoring "noauto" option for root device
	[  +0.161443] systemd-fstab-generator[521]: Ignoring "noauto" option for root device
	[  +0.332995] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[  +1.385910] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +2.283888] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +0.967574] kauditd_printk_skb: 225 callbacks suppressed
	[  +5.023864] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.365895] kauditd_printk_skb: 80 callbacks suppressed
	[Jan27 11:46] systemd-fstab-generator[3032]: Ignoring "noauto" option for root device
	[Jan27 11:47] systemd-fstab-generator[3396]: Ignoring "noauto" option for root device
	[  +0.087270] kauditd_printk_skb: 87 callbacks suppressed
	[  +5.383394] systemd-fstab-generator[3500]: Ignoring "noauto" option for root device
	[  +0.141338] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.024114] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.061162] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.052190] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [bb73d9fe3729da81efffc2bbac4d8fed9055414e43f045b96bdc838a83b600eb] <==
	{"level":"info","ts":"2025-01-27T11:50:29.173841Z","caller":"traceutil/trace.go:171","msg":"trace[1667898271] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"456.979186ms","start":"2025-01-27T11:50:28.716848Z","end":"2025-01-27T11:50:29.173827Z","steps":["trace[1667898271] 'process raft request'  (duration: 357.258956ms)","trace[1667898271] 'compare'  (duration: 96.956113ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T11:50:29.174336Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T11:50:28.716824Z","time spent":"457.36293ms","remote":"127.0.0.1:48226","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4482,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq\" mod_revision:663 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq\" value_size:4396 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq\" > >"}
	{"level":"warn","ts":"2025-01-27T11:50:29.173390Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T11:50:28.846088Z","time spent":"326.778742ms","remote":"127.0.0.1:48226","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T11:50:29.675416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.801592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:50:29.675644Z","caller":"traceutil/trace.go:171","msg":"trace[1160123513] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:751; }","duration":"230.075684ms","start":"2025-01-27T11:50:29.445549Z","end":"2025-01-27T11:50:29.675625Z","steps":["trace[1160123513] 'range keys from in-memory index tree'  (duration: 229.734866ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:50:29.676823Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.208841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:50:29.676903Z","caller":"traceutil/trace.go:171","msg":"trace[552255097] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:751; }","duration":"187.287286ms","start":"2025-01-27T11:50:29.489596Z","end":"2025-01-27T11:50:29.676884Z","steps":["trace[552255097] 'range keys from in-memory index tree'  (duration: 185.856232ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:51:18.349064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.29434ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:51:18.349214Z","caller":"traceutil/trace.go:171","msg":"trace[766871959] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:801; }","duration":"105.497777ms","start":"2025-01-27T11:51:18.243663Z","end":"2025-01-27T11:51:18.349160Z","steps":["trace[766871959] 'range keys from in-memory index tree'  (duration: 105.242856ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:52:20.780108Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.055559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:52:20.780824Z","caller":"traceutil/trace.go:171","msg":"trace[650371020] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:860; }","duration":"138.762728ms","start":"2025-01-27T11:52:20.641997Z","end":"2025-01-27T11:52:20.780759Z","steps":["trace[650371020] 'range keys from in-memory index tree'  (duration: 137.948442ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:52:21.425865Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.203102ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2738070545962608430 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.171\" mod_revision:850 > success:<request_put:<key:\"/registry/masterleases/192.168.72.171\" value_size:67 lease:2738070545962608428 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.171\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T11:52:21.426418Z","caller":"traceutil/trace.go:171","msg":"trace[1573745400] linearizableReadLoop","detail":"{readStateIndex:939; appliedIndex:938; }","duration":"185.17421ms","start":"2025-01-27T11:52:21.241150Z","end":"2025-01-27T11:52:21.426325Z","steps":["trace[1573745400] 'read index received'  (duration: 57.266863ms)","trace[1573745400] 'applied index is now lower than readState.Index'  (duration: 127.905786ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T11:52:21.426615Z","caller":"traceutil/trace.go:171","msg":"trace[1380050990] transaction","detail":"{read_only:false; response_revision:861; number_of_response:1; }","duration":"256.249513ms","start":"2025-01-27T11:52:21.170346Z","end":"2025-01-27T11:52:21.426595Z","steps":["trace[1380050990] 'process raft request'  (duration: 128.167227ms)","trace[1380050990] 'compare'  (duration: 127.091778ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T11:52:21.427502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.99366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:52:21.428091Z","caller":"traceutil/trace.go:171","msg":"trace[1365611756] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:861; }","duration":"186.952002ms","start":"2025-01-27T11:52:21.241129Z","end":"2025-01-27T11:52:21.428081Z","steps":["trace[1365611756] 'agreement among raft nodes before linearized reading'  (duration: 185.602495ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T11:56:58.051917Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":837}
	{"level":"info","ts":"2025-01-27T11:56:58.095736Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":837,"took":"42.699334ms","hash":2742367256,"current-db-size-bytes":3031040,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":3031040,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-01-27T11:56:58.095879Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2742367256,"revision":837,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T12:01:58.059293Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1089}
	{"level":"info","ts":"2025-01-27T12:01:58.063974Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1089,"took":"3.718738ms","hash":81811223,"current-db-size-bytes":3031040,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1773568,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T12:01:58.064150Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":81811223,"revision":1089,"compact-revision":837}
	{"level":"info","ts":"2025-01-27T12:06:58.065082Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1340}
	{"level":"info","ts":"2025-01-27T12:06:58.069083Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1340,"took":"3.489878ms","hash":2963485821,"current-db-size-bytes":3031040,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1794048,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T12:06:58.069163Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2963485821,"revision":1340,"compact-revision":1089}
	
	
	==> kernel <==
	 12:08:34 up 26 min,  0 users,  load average: 0.28, 0.26, 0.20
	Linux no-preload-976043 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [aaea52032a21044a8697f1cf67f1a61c3d4078d96bae383657162aa6dfe46e4c] <==
	 > logger="UnhandledError"
	I0127 12:05:00.468043       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:06:59.467125       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:06:59.467400       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:07:00.469666       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:07:00.469733       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 12:07:00.469763       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:07:00.470053       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:07:00.471115       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:07:00.471146       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:08:00.472046       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:08:00.472268       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:08:00.472180       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:08:00.472624       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0127 12:08:00.473797       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:08:00.473868       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4fafe9b41d24a0b36339d5cb43a3023744ee747d9f0d780743ce9cc91f21e4b7] <==
	I0127 12:03:32.863007       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="39.273µs"
	E0127 12:03:36.228202       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:03:36.279705       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:04:06.234112       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:04:06.287367       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:04:36.240589       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:04:36.294322       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:04:55.161696       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-976043"
	E0127 12:05:06.246599       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:05:06.300552       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:05:36.252919       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:05:36.308138       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:06:06.261277       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:06:06.317257       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:06:36.269445       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:06:36.325273       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:07:06.276054       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:07:06.332056       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:07:36.281988       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:07:36.338522       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:08:06.290040       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:08:06.346210       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:08:25.869906       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="119.632µs"
	I0127 12:08:29.264385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="60.153µs"
	I0127 12:08:31.994118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="43.466µs"
	
	
	==> kube-proxy [f936328e91f32ea805970efb2793e458dc0b62c4c3de292ca1926ef86e0773f6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 11:47:08.286258       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 11:47:08.326710       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.171"]
	E0127 11:47:08.326839       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 11:47:08.518150       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 11:47:08.518198       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 11:47:08.518224       1 server_linux.go:170] "Using iptables Proxier"
	I0127 11:47:08.523143       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 11:47:08.527631       1 server.go:497] "Version info" version="v1.32.1"
	I0127 11:47:08.527663       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 11:47:08.532438       1 config.go:199] "Starting service config controller"
	I0127 11:47:08.532547       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 11:47:08.532586       1 config.go:105] "Starting endpoint slice config controller"
	I0127 11:47:08.532592       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 11:47:08.533130       1 config.go:329] "Starting node config controller"
	I0127 11:47:08.533140       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 11:47:08.636582       1 shared_informer.go:320] Caches are synced for node config
	I0127 11:47:08.636629       1 shared_informer.go:320] Caches are synced for service config
	I0127 11:47:08.636638       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [765852d6ddf176224b3ad9dbebd8640d778f3694ba556d6351fa92740cfd5c40] <==
	W0127 11:46:59.490179       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 11:46:59.490212       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:46:59.490417       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 11:46:59.490494       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:00.338393       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 11:47:00.338440       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:00.363620       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 11:47:00.363672       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:00.461212       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 11:47:00.461282       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:00.467537       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 11:47:00.467593       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:00.501400       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 11:47:00.501527       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:00.568080       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 11:47:00.568389       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:00.578072       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 11:47:00.578661       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:00.605529       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 11:47:00.605830       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:00.643250       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 11:47:00.643730       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:00.700651       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 11:47:00.700895       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 11:47:03.372561       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:07:37 no-preload-976043 kubelet[3403]: E0127 12:07:37.850846    3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2swq_kubernetes-dashboard(50c34ab2-9bca-4cc6-a360-5de0898bfab9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq" podUID="50c34ab2-9bca-4cc6-a360-5de0898bfab9"
	Jan 27 12:07:40 no-preload-976043 kubelet[3403]: E0127 12:07:40.850994    3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cxprr" podUID="fcf4fd1c-5cc8-43ab-a46a-32c4f5559168"
	Jan 27 12:07:49 no-preload-976043 kubelet[3403]: I0127 12:07:49.852785    3403 scope.go:117] "RemoveContainer" containerID="bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb"
	Jan 27 12:07:49 no-preload-976043 kubelet[3403]: E0127 12:07:49.853243    3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2swq_kubernetes-dashboard(50c34ab2-9bca-4cc6-a360-5de0898bfab9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq" podUID="50c34ab2-9bca-4cc6-a360-5de0898bfab9"
	Jan 27 12:07:55 no-preload-976043 kubelet[3403]: E0127 12:07:55.853285    3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cxprr" podUID="fcf4fd1c-5cc8-43ab-a46a-32c4f5559168"
	Jan 27 12:08:01 no-preload-976043 kubelet[3403]: E0127 12:08:01.878224    3403 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:08:01 no-preload-976043 kubelet[3403]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:08:01 no-preload-976043 kubelet[3403]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:08:01 no-preload-976043 kubelet[3403]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:08:01 no-preload-976043 kubelet[3403]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:08:03 no-preload-976043 kubelet[3403]: I0127 12:08:03.850704    3403 scope.go:117] "RemoveContainer" containerID="bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb"
	Jan 27 12:08:03 no-preload-976043 kubelet[3403]: E0127 12:08:03.852958    3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2swq_kubernetes-dashboard(50c34ab2-9bca-4cc6-a360-5de0898bfab9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq" podUID="50c34ab2-9bca-4cc6-a360-5de0898bfab9"
	Jan 27 12:08:10 no-preload-976043 kubelet[3403]: E0127 12:08:10.873533    3403 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 12:08:10 no-preload-976043 kubelet[3403]: E0127 12:08:10.874306    3403 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 12:08:10 no-preload-976043 kubelet[3403]: E0127 12:08:10.874870    3403 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jw9tj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-cxprr_kube-system(fcf4fd1c-5cc8-43ab-a46a-32c4f5559168): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 12:08:10 no-preload-976043 kubelet[3403]: E0127 12:08:10.876703    3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cxprr" podUID="fcf4fd1c-5cc8-43ab-a46a-32c4f5559168"
	Jan 27 12:08:14 no-preload-976043 kubelet[3403]: I0127 12:08:14.850209    3403 scope.go:117] "RemoveContainer" containerID="bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb"
	Jan 27 12:08:14 no-preload-976043 kubelet[3403]: E0127 12:08:14.850395    3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2swq_kubernetes-dashboard(50c34ab2-9bca-4cc6-a360-5de0898bfab9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq" podUID="50c34ab2-9bca-4cc6-a360-5de0898bfab9"
	Jan 27 12:08:25 no-preload-976043 kubelet[3403]: E0127 12:08:25.850968    3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cxprr" podUID="fcf4fd1c-5cc8-43ab-a46a-32c4f5559168"
	Jan 27 12:08:28 no-preload-976043 kubelet[3403]: I0127 12:08:28.850412    3403 scope.go:117] "RemoveContainer" containerID="bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb"
	Jan 27 12:08:29 no-preload-976043 kubelet[3403]: I0127 12:08:29.244277    3403 scope.go:117] "RemoveContainer" containerID="bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb"
	Jan 27 12:08:29 no-preload-976043 kubelet[3403]: I0127 12:08:29.244918    3403 scope.go:117] "RemoveContainer" containerID="4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30"
	Jan 27 12:08:29 no-preload-976043 kubelet[3403]: E0127 12:08:29.245151    3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2swq_kubernetes-dashboard(50c34ab2-9bca-4cc6-a360-5de0898bfab9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq" podUID="50c34ab2-9bca-4cc6-a360-5de0898bfab9"
	Jan 27 12:08:31 no-preload-976043 kubelet[3403]: I0127 12:08:31.976638    3403 scope.go:117] "RemoveContainer" containerID="4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30"
	Jan 27 12:08:31 no-preload-976043 kubelet[3403]: E0127 12:08:31.976862    3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2swq_kubernetes-dashboard(50c34ab2-9bca-4cc6-a360-5de0898bfab9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq" podUID="50c34ab2-9bca-4cc6-a360-5de0898bfab9"
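The kubelet entries above show the two pods that never come up: dashboard-metrics-scraper (CrashLoopBackOff, on its ninth attempt per the container status table) and metrics-server (ImagePullBackOff against fake.domain). A minimal sketch for inspecting them, assuming the kubectl context carries the profile name:

    kubectl --context no-preload-976043 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-86c6bf9756-k2swq
    kubectl --context no-preload-976043 -n kubernetes-dashboard logs dashboard-metrics-scraper-86c6bf9756-k2swq --previous
    kubectl --context no-preload-976043 -n kube-system describe pod metrics-server-f79f97bbb-cxprr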
	
	
	==> kubernetes-dashboard [b9368a2abd7ba22861a95efcc12a6cc204126f8ea0ff3e0ccd83405833df76a9] <==
	2025/01/27 11:56:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:56:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:57:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:57:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:58:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:58:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:59:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:59:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:00:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:00:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:01:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:01:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:02:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:02:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:03:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:03:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:04:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:04:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:05:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:05:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:06:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:06:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:07:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:07:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:08:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [c005698c25d4503489975c78a07c506bad86865b449b9f2471a3f1bf1c7fc878] <==
	I0127 11:47:09.725684       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 11:47:09.739243       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 11:47:09.739332       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 11:47:09.754757       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 11:47:09.755961       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-976043_a13ba666-369f-4b7e-a067-7b35fb475696!
	I0127 11:47:09.760243       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bac9279e-01a7-4e0c-b034-618db64da2f3", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-976043_a13ba666-369f-4b7e-a067-7b35fb475696 became leader
	I0127 11:47:09.856548       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-976043_a13ba666-369f-4b7e-a067-7b35fb475696!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-976043 -n no-preload-976043
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-976043 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-cxprr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-976043 describe pod metrics-server-f79f97bbb-cxprr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-976043 describe pod metrics-server-f79f97bbb-cxprr: exit status 1 (68.988158ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-cxprr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-976043 describe pod metrics-server-f79f97bbb-cxprr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1591.89s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7200.057s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-259716 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 11:43:22.837337  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-259716 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m56.531626213s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-259716] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "default-k8s-diff-port-259716" primary control-plane node in "default-k8s-diff-port-259716" cluster
	* Restarting existing kvm2 VM for "default-k8s-diff-port-259716" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-259716 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:42:43.241919  398042 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:42:43.242070  398042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:42:43.242078  398042 out.go:358] Setting ErrFile to fd 2...
	I0127 11:42:43.242084  398042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:42:43.242450  398042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
	I0127 11:42:43.243260  398042 out.go:352] Setting JSON to false
	I0127 11:42:43.244649  398042 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8708,"bootTime":1737969455,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:42:43.244742  398042 start.go:139] virtualization: kvm guest
	I0127 11:42:43.246790  398042 out.go:177] * [default-k8s-diff-port-259716] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:42:43.247989  398042 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:42:43.248094  398042 notify.go:220] Checking for updates...
	I0127 11:42:43.250321  398042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:42:43.251604  398042 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 11:42:43.252901  398042 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	I0127 11:42:43.254239  398042 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:42:43.255490  398042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:42:43.257554  398042 config.go:182] Loaded profile config "default-k8s-diff-port-259716": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:42:43.258275  398042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:42:43.258380  398042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:43.281783  398042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I0127 11:42:43.282297  398042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:43.282985  398042 main.go:141] libmachine: Using API Version  1
	I0127 11:42:43.283011  398042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:43.283533  398042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:43.284035  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:42:43.284279  398042 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:42:43.284693  398042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:42:43.284748  398042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:43.305376  398042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0127 11:42:43.305886  398042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:43.306373  398042 main.go:141] libmachine: Using API Version  1
	I0127 11:42:43.306389  398042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:43.306739  398042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:43.306979  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:42:43.345443  398042 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:42:43.346636  398042 start.go:297] selected driver: kvm2
	I0127 11:42:43.346652  398042 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-259716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-259716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:42:43.346798  398042 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:42:43.347447  398042 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:43.347520  398042 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-348858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:42:43.362365  398042 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:42:43.362852  398042 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:42:43.362894  398042 cni.go:84] Creating CNI manager for ""
	I0127 11:42:43.362947  398042 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 11:42:43.362995  398042 start.go:340] cluster config:
	{Name:default-k8s-diff-port-259716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-259716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:42:43.363145  398042 iso.go:125] acquiring lock: {Name:mk6cdd2a3d0bfb3682c1f0c806368944f23c4809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:43.364741  398042 out.go:177] * Starting "default-k8s-diff-port-259716" primary control-plane node in "default-k8s-diff-port-259716" cluster
	I0127 11:42:43.365786  398042 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:42:43.365832  398042 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 11:42:43.365844  398042 cache.go:56] Caching tarball of preloaded images
	I0127 11:42:43.365948  398042 preload.go:172] Found /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 11:42:43.365964  398042 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 11:42:43.366117  398042 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/config.json ...
	I0127 11:42:43.366372  398042 start.go:360] acquireMachinesLock for default-k8s-diff-port-259716: {Name:mk69dba1a41baeb0794a28159a5cef220370e224 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:42:50.765802  398042 start.go:364] duration metric: took 7.399370706s to acquireMachinesLock for "default-k8s-diff-port-259716"
	I0127 11:42:50.765848  398042 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:42:50.765856  398042 fix.go:54] fixHost starting: 
	I0127 11:42:50.766287  398042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:42:50.766356  398042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:50.784009  398042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I0127 11:42:50.784502  398042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:50.785021  398042 main.go:141] libmachine: Using API Version  1
	I0127 11:42:50.785045  398042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:50.785387  398042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:50.785598  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:42:50.785755  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetState
	I0127 11:42:50.787180  398042 fix.go:112] recreateIfNeeded on default-k8s-diff-port-259716: state=Stopped err=<nil>
	I0127 11:42:50.787215  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	W0127 11:42:50.787384  398042 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:42:50.789375  398042 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-259716" ...
	I0127 11:42:50.790666  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .Start
	I0127 11:42:50.790836  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) starting domain...
	I0127 11:42:50.790856  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) ensuring networks are active...
	I0127 11:42:50.791451  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Ensuring network default is active
	I0127 11:42:50.791754  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Ensuring network mk-default-k8s-diff-port-259716 is active
	I0127 11:42:50.792230  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) getting domain XML...
	I0127 11:42:50.793143  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) creating domain...
	I0127 11:42:51.145325  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) waiting for IP...
	I0127 11:42:51.146536  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:42:51.147138  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:42:51.147240  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:42:51.147132  398129 retry.go:31] will retry after 209.533991ms: waiting for domain to come up
	I0127 11:42:51.358895  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:42:51.359738  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:42:51.359787  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:42:51.359711  398129 retry.go:31] will retry after 275.828321ms: waiting for domain to come up
	I0127 11:42:51.637306  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:42:51.637990  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:42:51.638019  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:42:51.637958  398129 retry.go:31] will retry after 411.073724ms: waiting for domain to come up
	I0127 11:42:52.050600  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:42:52.051206  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:42:52.051233  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:42:52.051169  398129 retry.go:31] will retry after 568.030589ms: waiting for domain to come up
	I0127 11:42:52.620853  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:42:52.621335  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:42:52.621356  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:42:52.621299  398129 retry.go:31] will retry after 648.462889ms: waiting for domain to come up
	I0127 11:42:53.272013  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:42:53.272755  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:42:53.272785  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:42:53.272743  398129 retry.go:31] will retry after 753.606143ms: waiting for domain to come up
	I0127 11:42:54.027531  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:42:54.028133  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:42:54.028165  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:42:54.028081  398129 retry.go:31] will retry after 1.064987323s: waiting for domain to come up
	I0127 11:42:55.094375  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:42:55.094945  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:42:55.094998  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:42:55.094899  398129 retry.go:31] will retry after 1.298554927s: waiting for domain to come up
	I0127 11:42:56.395519  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:42:56.396145  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:42:56.396178  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:42:56.396059  398129 retry.go:31] will retry after 1.435723084s: waiting for domain to come up
	I0127 11:42:57.833165  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:42:57.833688  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:42:57.833715  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:42:57.833647  398129 retry.go:31] will retry after 2.094029678s: waiting for domain to come up
	I0127 11:42:59.930780  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:42:59.931645  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:42:59.931673  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:42:59.931547  398129 retry.go:31] will retry after 2.84242132s: waiting for domain to come up
	I0127 11:43:02.775994  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:02.776501  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:43:02.776548  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:43:02.776471  398129 retry.go:31] will retry after 2.261253577s: waiting for domain to come up
	I0127 11:43:05.039410  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:05.039998  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | unable to find current IP address of domain default-k8s-diff-port-259716 in network mk-default-k8s-diff-port-259716
	I0127 11:43:05.040031  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | I0127 11:43:05.039947  398129 retry.go:31] will retry after 3.993724664s: waiting for domain to come up
	I0127 11:43:09.037812  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.038367  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) found domain IP: 192.168.39.215
	I0127 11:43:09.038398  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) reserving static IP address...
	I0127 11:43:09.038412  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has current primary IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.038943  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-259716", mac: "52:54:00:d7:b5:51", ip: "192.168.39.215"} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:09.038977  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) reserved static IP address 192.168.39.215 for domain default-k8s-diff-port-259716
	I0127 11:43:09.039000  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | skip adding static IP to network mk-default-k8s-diff-port-259716 - found existing host DHCP lease matching {name: "default-k8s-diff-port-259716", mac: "52:54:00:d7:b5:51", ip: "192.168.39.215"}
	I0127 11:43:09.039019  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | Getting to WaitForSSH function...
	I0127 11:43:09.039032  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) waiting for SSH...
	I0127 11:43:09.041624  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.042095  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:09.042117  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.042274  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | Using SSH client type: external
	I0127 11:43:09.042306  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/default-k8s-diff-port-259716/id_rsa (-rw-------)
	I0127 11:43:09.042342  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-348858/.minikube/machines/default-k8s-diff-port-259716/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:43:09.042356  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | About to run SSH command:
	I0127 11:43:09.042394  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | exit 0
	I0127 11:43:09.170288  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | SSH cmd err, output: <nil>: 
	I0127 11:43:09.170654  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetConfigRaw
	I0127 11:43:09.171340  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetIP
	I0127 11:43:09.174462  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.174904  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:09.174956  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.175243  398042 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/config.json ...
	I0127 11:43:09.175445  398042 machine.go:93] provisionDockerMachine start ...
	I0127 11:43:09.175469  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:43:09.175660  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:43:09.177850  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.178199  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:09.178245  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.178327  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHPort
	I0127 11:43:09.178520  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:43:09.178695  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:43:09.178855  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHUsername
	I0127 11:43:09.179071  398042 main.go:141] libmachine: Using SSH client type: native
	I0127 11:43:09.179274  398042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0127 11:43:09.179285  398042 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:43:09.281850  398042 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:43:09.281880  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetMachineName
	I0127 11:43:09.282117  398042 buildroot.go:166] provisioning hostname "default-k8s-diff-port-259716"
	I0127 11:43:09.282158  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetMachineName
	I0127 11:43:09.282345  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:43:09.285235  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.285665  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:09.285695  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.285865  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHPort
	I0127 11:43:09.286046  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:43:09.286215  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:43:09.286356  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHUsername
	I0127 11:43:09.286539  398042 main.go:141] libmachine: Using SSH client type: native
	I0127 11:43:09.286741  398042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0127 11:43:09.286754  398042 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-259716 && echo "default-k8s-diff-port-259716" | sudo tee /etc/hostname
	I0127 11:43:09.400942  398042 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-259716
	
	I0127 11:43:09.400975  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:43:09.404382  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.404844  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:09.404887  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.405123  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHPort
	I0127 11:43:09.405297  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:43:09.405488  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:43:09.405696  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHUsername
	I0127 11:43:09.405922  398042 main.go:141] libmachine: Using SSH client type: native
	I0127 11:43:09.406158  398042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0127 11:43:09.406188  398042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-259716' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-259716/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-259716' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:43:09.518227  398042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:43:09.518258  398042 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-348858/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-348858/.minikube}
	I0127 11:43:09.518284  398042 buildroot.go:174] setting up certificates
	I0127 11:43:09.518303  398042 provision.go:84] configureAuth start
	I0127 11:43:09.518318  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetMachineName
	I0127 11:43:09.518623  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetIP
	I0127 11:43:09.521317  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.521705  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:09.521733  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.521939  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:43:09.524216  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.524657  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:09.524705  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.524795  398042 provision.go:143] copyHostCerts
	I0127 11:43:09.524868  398042 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem, removing ...
	I0127 11:43:09.524892  398042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem
	I0127 11:43:09.524962  398042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem (1082 bytes)
	I0127 11:43:09.525115  398042 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem, removing ...
	I0127 11:43:09.525130  398042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem
	I0127 11:43:09.525155  398042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem (1123 bytes)
	I0127 11:43:09.525233  398042 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem, removing ...
	I0127 11:43:09.525240  398042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem
	I0127 11:43:09.525258  398042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem (1679 bytes)
	I0127 11:43:09.525316  398042 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-259716 san=[127.0.0.1 192.168.39.215 default-k8s-diff-port-259716 localhost minikube]
	I0127 11:43:09.842731  398042 provision.go:177] copyRemoteCerts
	I0127 11:43:09.842786  398042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:43:09.842809  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:43:09.845620  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.845993  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:09.846030  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:09.846203  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHPort
	I0127 11:43:09.846369  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:43:09.846534  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHUsername
	I0127 11:43:09.846636  398042 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/default-k8s-diff-port-259716/id_rsa Username:docker}
	I0127 11:43:09.927921  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 11:43:09.951855  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0127 11:43:09.975787  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:43:10.005076  398042 provision.go:87] duration metric: took 486.754587ms to configureAuth
	I0127 11:43:10.005111  398042 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:43:10.005353  398042 config.go:182] Loaded profile config "default-k8s-diff-port-259716": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:43:10.005379  398042 machine.go:96] duration metric: took 829.919335ms to provisionDockerMachine
	I0127 11:43:10.005392  398042 start.go:293] postStartSetup for "default-k8s-diff-port-259716" (driver="kvm2")
	I0127 11:43:10.005416  398042 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:43:10.005482  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:43:10.005800  398042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:43:10.005831  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:43:10.008876  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:10.009291  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:10.009329  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:10.009517  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHPort
	I0127 11:43:10.009711  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:43:10.009858  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHUsername
	I0127 11:43:10.010041  398042 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/default-k8s-diff-port-259716/id_rsa Username:docker}
	I0127 11:43:10.096318  398042 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:43:10.100675  398042 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:43:10.100699  398042 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-348858/.minikube/addons for local assets ...
	I0127 11:43:10.100758  398042 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-348858/.minikube/files for local assets ...
	I0127 11:43:10.100835  398042 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem -> 3562042.pem in /etc/ssl/certs
	I0127 11:43:10.100940  398042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:43:10.110464  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem --> /etc/ssl/certs/3562042.pem (1708 bytes)
	I0127 11:43:10.135022  398042 start.go:296] duration metric: took 129.610226ms for postStartSetup
	I0127 11:43:10.135063  398042 fix.go:56] duration metric: took 19.369207388s for fixHost
	I0127 11:43:10.135090  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:43:10.137404  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:10.137829  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:10.137859  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:10.138031  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHPort
	I0127 11:43:10.138230  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:43:10.138349  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:43:10.138466  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHUsername
	I0127 11:43:10.138602  398042 main.go:141] libmachine: Using SSH client type: native
	I0127 11:43:10.138810  398042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0127 11:43:10.138823  398042 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:43:10.238081  398042 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978190.214923442
	
	I0127 11:43:10.238103  398042 fix.go:216] guest clock: 1737978190.214923442
	I0127 11:43:10.238112  398042 fix.go:229] Guest: 2025-01-27 11:43:10.214923442 +0000 UTC Remote: 2025-01-27 11:43:10.135068626 +0000 UTC m=+26.960292756 (delta=79.854816ms)
	I0127 11:43:10.238140  398042 fix.go:200] guest clock delta is within tolerance: 79.854816ms
	I0127 11:43:10.238154  398042 start.go:83] releasing machines lock for "default-k8s-diff-port-259716", held for 19.472324494s
	I0127 11:43:10.238183  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:43:10.238417  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetIP
	I0127 11:43:10.240873  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:10.241188  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:10.241210  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:10.241377  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:43:10.241811  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:43:10.241993  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:43:10.242086  398042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:43:10.242135  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:43:10.242244  398042 ssh_runner.go:195] Run: cat /version.json
	I0127 11:43:10.242274  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:43:10.244855  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:10.244886  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:10.245231  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:10.245283  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:10.245341  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:10.245382  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:10.245551  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHPort
	I0127 11:43:10.245729  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHPort
	I0127 11:43:10.245751  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:43:10.245909  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:43:10.245915  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHUsername
	I0127 11:43:10.246061  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHUsername
	I0127 11:43:10.246087  398042 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/default-k8s-diff-port-259716/id_rsa Username:docker}
	I0127 11:43:10.246196  398042 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/default-k8s-diff-port-259716/id_rsa Username:docker}
	I0127 11:43:10.340989  398042 ssh_runner.go:195] Run: systemctl --version
	I0127 11:43:10.347255  398042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:43:10.353307  398042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:43:10.353363  398042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:43:10.369175  398042 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:43:10.369197  398042 start.go:495] detecting cgroup driver to use...
	I0127 11:43:10.369254  398042 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 11:43:10.395018  398042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:43:10.409398  398042 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:43:10.409454  398042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:43:10.423958  398042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:43:10.439662  398042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:43:10.567699  398042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:43:10.708502  398042 docker.go:233] disabling docker service ...
	I0127 11:43:10.708590  398042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:43:10.728531  398042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:43:10.746061  398042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:43:10.884556  398042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:43:11.011440  398042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:43:11.025500  398042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:43:11.045641  398042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 11:43:11.058944  398042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 11:43:11.073461  398042 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 11:43:11.073519  398042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 11:43:11.087931  398042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:43:11.099277  398042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 11:43:11.114299  398042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:43:11.125326  398042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:43:11.137240  398042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 11:43:11.147979  398042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 11:43:11.159225  398042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 11:43:11.172825  398042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:43:11.184173  398042 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:43:11.184229  398042 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:43:11.198319  398042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:43:11.211650  398042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:43:11.343411  398042 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 11:43:11.371758  398042 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 11:43:11.371841  398042 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 11:43:11.377143  398042 retry.go:31] will retry after 570.698256ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 11:43:11.949011  398042 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 11:43:11.954835  398042 start.go:563] Will wait 60s for crictl version
	I0127 11:43:11.954887  398042 ssh_runner.go:195] Run: which crictl
	I0127 11:43:11.960245  398042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:43:12.006075  398042 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 11:43:12.006154  398042 ssh_runner.go:195] Run: containerd --version
	I0127 11:43:12.036148  398042 ssh_runner.go:195] Run: containerd --version
	I0127 11:43:12.076577  398042 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 11:43:12.077662  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetIP
	I0127 11:43:12.080311  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:12.080654  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:43:12.080684  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:43:12.080908  398042 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 11:43:12.085266  398042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:43:12.099340  398042 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-259716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-259716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:43:12.099501  398042 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:43:12.099572  398042 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:43:12.140357  398042 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 11:43:12.140382  398042 containerd.go:534] Images already preloaded, skipping extraction
	I0127 11:43:12.140448  398042 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:43:12.179024  398042 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 11:43:12.179048  398042 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:43:12.179056  398042 kubeadm.go:934] updating node { 192.168.39.215 8444 v1.32.1 containerd true true} ...
	I0127 11:43:12.179161  398042 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-259716 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-259716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:43:12.179216  398042 ssh_runner.go:195] Run: sudo crictl info
	I0127 11:43:12.216220  398042 cni.go:84] Creating CNI manager for ""
	I0127 11:43:12.216250  398042 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 11:43:12.216263  398042 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:43:12.216294  398042 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-259716 NodeName:default-k8s-diff-port-259716 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:43:12.216466  398042 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-259716"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.215"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:43:12.216548  398042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:43:12.228986  398042 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:43:12.229055  398042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:43:12.238791  398042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I0127 11:43:12.256930  398042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:43:12.274249  398042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2324 bytes)
	I0127 11:43:12.292423  398042 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I0127 11:43:12.296938  398042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:43:12.309554  398042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:43:12.440930  398042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:43:12.463989  398042 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716 for IP: 192.168.39.215
	I0127 11:43:12.464018  398042 certs.go:194] generating shared ca certs ...
	I0127 11:43:12.464042  398042 certs.go:226] acquiring lock for ca certs: {Name:mkd458666dacb6826c0d92f860c3c2133957f34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:43:12.464243  398042 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-348858/.minikube/ca.key
	I0127 11:43:12.464308  398042 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.key
	I0127 11:43:12.464325  398042 certs.go:256] generating profile certs ...
	I0127 11:43:12.464502  398042 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.key
	I0127 11:43:12.464596  398042 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/apiserver.key.20530a15
	I0127 11:43:12.464657  398042 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/proxy-client.key
	I0127 11:43:12.464821  398042 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204.pem (1338 bytes)
	W0127 11:43:12.464868  398042 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204_empty.pem, impossibly tiny 0 bytes
	I0127 11:43:12.464883  398042 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:43:12.464916  398042 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem (1082 bytes)
	I0127 11:43:12.464951  398042 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:43:12.464985  398042 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem (1679 bytes)
	I0127 11:43:12.465053  398042 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem (1708 bytes)
	I0127 11:43:12.465828  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:43:12.519042  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:43:12.554402  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:43:12.595670  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 11:43:12.623688  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0127 11:43:12.652355  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:43:12.683861  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:43:12.713012  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:43:12.741360  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:43:12.769055  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204.pem --> /usr/share/ca-certificates/356204.pem (1338 bytes)
	I0127 11:43:12.794646  398042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem --> /usr/share/ca-certificates/3562042.pem (1708 bytes)
	I0127 11:43:12.823156  398042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:43:12.842726  398042 ssh_runner.go:195] Run: openssl version
	I0127 11:43:12.848425  398042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:43:12.859630  398042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:43:12.864100  398042 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:43:12.864150  398042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:43:12.869914  398042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:43:12.880670  398042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356204.pem && ln -fs /usr/share/ca-certificates/356204.pem /etc/ssl/certs/356204.pem"
	I0127 11:43:12.891543  398042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356204.pem
	I0127 11:43:12.896030  398042 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/356204.pem
	I0127 11:43:12.896067  398042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356204.pem
	I0127 11:43:12.902029  398042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356204.pem /etc/ssl/certs/51391683.0"
	I0127 11:43:12.914229  398042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3562042.pem && ln -fs /usr/share/ca-certificates/3562042.pem /etc/ssl/certs/3562042.pem"
	I0127 11:43:12.925712  398042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3562042.pem
	I0127 11:43:12.930273  398042 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/3562042.pem
	I0127 11:43:12.930337  398042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3562042.pem
	I0127 11:43:12.935941  398042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3562042.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:43:12.947403  398042 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:43:12.952880  398042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:43:12.959192  398042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:43:12.964734  398042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:43:12.970414  398042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:43:12.976211  398042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:43:12.982039  398042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 11:43:12.987818  398042 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-259716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-259716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:43:12.987937  398042 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 11:43:12.987991  398042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:43:13.034301  398042 cri.go:89] found id: "edcd39c3923804a79295490dc6420b27130a6122c50b6b0bf13dd2335885016d"
	I0127 11:43:13.034321  398042 cri.go:89] found id: "211baa3bd3bab1991ada478e347194169049ffba52f6c813990dd826f9161737"
	I0127 11:43:13.034325  398042 cri.go:89] found id: "eacf998e9a2f24ebc90af53759c56046e7f5404dffc8fea4104d8cc612eb4cf2"
	I0127 11:43:13.034328  398042 cri.go:89] found id: "95cfa56a59dc32f1e509762636c5a4ec3908944de13e2475cc702a43631337ba"
	I0127 11:43:13.034331  398042 cri.go:89] found id: "58db671b4469ef3f73b49068606b0c2e694672c852de2339e6ab65809923913a"
	I0127 11:43:13.034334  398042 cri.go:89] found id: "2f42a0741df742584c088ceaf94c57ed61e3fa7c8539ca582ac43053c026fe23"
	I0127 11:43:13.034337  398042 cri.go:89] found id: "6ff7117816af89ae155d3c3b6b76efe713301b0e93b244c89f0e9914d7595c08"
	I0127 11:43:13.034340  398042 cri.go:89] found id: ""
	I0127 11:43:13.034386  398042 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 11:43:13.048922  398042 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T11:43:13Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 11:43:13.048999  398042 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:43:13.058526  398042 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 11:43:13.058545  398042 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 11:43:13.058587  398042 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 11:43:13.071017  398042 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:43:13.072184  398042 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-259716" does not appear in /home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 11:43:13.072884  398042 kubeconfig.go:62] /home/jenkins/minikube-integration/20319-348858/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-259716" cluster setting kubeconfig missing "default-k8s-diff-port-259716" context setting]
	I0127 11:43:13.073787  398042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/kubeconfig: {Name:mk12891275228a2835a35659c2ede45028f0a576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:43:13.075404  398042 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 11:43:13.084901  398042 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.215
	I0127 11:43:13.084937  398042 kubeadm.go:1160] stopping kube-system containers ...
	I0127 11:43:13.084950  398042 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 11:43:13.084990  398042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:43:13.131674  398042 cri.go:89] found id: "edcd39c3923804a79295490dc6420b27130a6122c50b6b0bf13dd2335885016d"
	I0127 11:43:13.131695  398042 cri.go:89] found id: "211baa3bd3bab1991ada478e347194169049ffba52f6c813990dd826f9161737"
	I0127 11:43:13.131700  398042 cri.go:89] found id: "eacf998e9a2f24ebc90af53759c56046e7f5404dffc8fea4104d8cc612eb4cf2"
	I0127 11:43:13.131705  398042 cri.go:89] found id: "95cfa56a59dc32f1e509762636c5a4ec3908944de13e2475cc702a43631337ba"
	I0127 11:43:13.131708  398042 cri.go:89] found id: "58db671b4469ef3f73b49068606b0c2e694672c852de2339e6ab65809923913a"
	I0127 11:43:13.131712  398042 cri.go:89] found id: "2f42a0741df742584c088ceaf94c57ed61e3fa7c8539ca582ac43053c026fe23"
	I0127 11:43:13.131715  398042 cri.go:89] found id: "6ff7117816af89ae155d3c3b6b76efe713301b0e93b244c89f0e9914d7595c08"
	I0127 11:43:13.131719  398042 cri.go:89] found id: ""
	I0127 11:43:13.131725  398042 cri.go:252] Stopping containers: [edcd39c3923804a79295490dc6420b27130a6122c50b6b0bf13dd2335885016d 211baa3bd3bab1991ada478e347194169049ffba52f6c813990dd826f9161737 eacf998e9a2f24ebc90af53759c56046e7f5404dffc8fea4104d8cc612eb4cf2 95cfa56a59dc32f1e509762636c5a4ec3908944de13e2475cc702a43631337ba 58db671b4469ef3f73b49068606b0c2e694672c852de2339e6ab65809923913a 2f42a0741df742584c088ceaf94c57ed61e3fa7c8539ca582ac43053c026fe23 6ff7117816af89ae155d3c3b6b76efe713301b0e93b244c89f0e9914d7595c08]
	I0127 11:43:13.131775  398042 ssh_runner.go:195] Run: which crictl
	I0127 11:43:13.136008  398042 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 edcd39c3923804a79295490dc6420b27130a6122c50b6b0bf13dd2335885016d 211baa3bd3bab1991ada478e347194169049ffba52f6c813990dd826f9161737 eacf998e9a2f24ebc90af53759c56046e7f5404dffc8fea4104d8cc612eb4cf2 95cfa56a59dc32f1e509762636c5a4ec3908944de13e2475cc702a43631337ba 58db671b4469ef3f73b49068606b0c2e694672c852de2339e6ab65809923913a 2f42a0741df742584c088ceaf94c57ed61e3fa7c8539ca582ac43053c026fe23 6ff7117816af89ae155d3c3b6b76efe713301b0e93b244c89f0e9914d7595c08
	I0127 11:43:13.170983  398042 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 11:43:13.187865  398042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:43:13.197606  398042 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:43:13.197625  398042 kubeadm.go:157] found existing configuration files:
	
	I0127 11:43:13.197668  398042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 11:43:13.206839  398042 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:43:13.206894  398042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:43:13.216510  398042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 11:43:13.226726  398042 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:43:13.226778  398042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:43:13.236516  398042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 11:43:13.246080  398042 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:43:13.246125  398042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:43:13.255613  398042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 11:43:13.264924  398042 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:43:13.264966  398042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:43:13.275494  398042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:43:13.286169  398042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:43:13.446246  398042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:43:14.319155  398042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:43:14.546610  398042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:43:14.643116  398042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:43:14.765198  398042 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:43:14.765281  398042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:43:15.265371  398042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:43:15.766327  398042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:43:16.266081  398042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:43:16.300411  398042 api_server.go:72] duration metric: took 1.535210617s to wait for apiserver process to appear ...
	I0127 11:43:16.300447  398042 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:43:16.300471  398042 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I0127 11:43:16.301000  398042 api_server.go:269] stopped: https://192.168.39.215:8444/healthz: Get "https://192.168.39.215:8444/healthz": dial tcp 192.168.39.215:8444: connect: connection refused
	I0127 11:43:16.801572  398042 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I0127 11:43:18.916483  398042 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 11:43:18.916514  398042 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 11:43:18.916532  398042 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I0127 11:43:18.974233  398042 api_server.go:279] https://192.168.39.215:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 11:43:18.974264  398042 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 11:43:19.300697  398042 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I0127 11:43:19.305618  398042 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:43:19.305646  398042 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 11:43:19.800929  398042 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I0127 11:43:19.810235  398042 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:43:19.810271  398042 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 11:43:20.300886  398042 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I0127 11:43:20.313643  398042 api_server.go:279] https://192.168.39.215:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:43:20.313683  398042 api_server.go:103] status: https://192.168.39.215:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 11:43:20.801287  398042 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I0127 11:43:20.806559  398042 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I0127 11:43:20.816684  398042 api_server.go:141] control plane version: v1.32.1
	I0127 11:43:20.816717  398042 api_server.go:131] duration metric: took 4.516261216s to wait for apiserver health ...
	I0127 11:43:20.816729  398042 cni.go:84] Creating CNI manager for ""
	I0127 11:43:20.816738  398042 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 11:43:20.818051  398042 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:43:20.819258  398042 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:43:20.830868  398042 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 11:43:20.855413  398042 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:43:20.869151  398042 system_pods.go:59] 8 kube-system pods found
	I0127 11:43:20.869177  398042 system_pods.go:61] "coredns-668d6bf9bc-dnjbc" [9ad6bf06-fe52-4f73-8e8a-2b15d715dfac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 11:43:20.869185  398042 system_pods.go:61] "etcd-default-k8s-diff-port-259716" [f076da0b-9dc8-48eb-bf6b-80267a8046b5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 11:43:20.869192  398042 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-259716" [6a112bc3-7ea7-43ca-9d21-06c19da29820] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 11:43:20.869198  398042 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-259716" [9283c5ff-f1ea-4631-a4eb-958db4e4c825] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 11:43:20.869207  398042 system_pods.go:61] "kube-proxy-7q8qt" [ad37afed-0c1a-4bb3-baf3-460617075c92] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 11:43:20.869212  398042 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-259716" [bda9ca6c-33b0-4e09-8eac-deeebb342c49] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 11:43:20.869220  398042 system_pods.go:61] "metrics-server-f79f97bbb-pvlcm" [eafbd250-b920-4ee2-9c22-8d0004bbd9b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:43:20.869229  398042 system_pods.go:61] "storage-provisioner" [1a6ad8dc-7d30-424e-82ef-ceed2a99dbf2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 11:43:20.869237  398042 system_pods.go:74] duration metric: took 13.806486ms to wait for pod list to return data ...
	I0127 11:43:20.869244  398042 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:43:20.876019  398042 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:43:20.876045  398042 node_conditions.go:123] node cpu capacity is 2
	I0127 11:43:20.876055  398042 node_conditions.go:105] duration metric: took 6.803971ms to run NodePressure ...
	I0127 11:43:20.876074  398042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:43:21.307926  398042 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 11:43:21.315489  398042 kubeadm.go:739] kubelet initialised
	I0127 11:43:21.315521  398042 kubeadm.go:740] duration metric: took 7.562496ms waiting for restarted kubelet to initialise ...
	I0127 11:43:21.315536  398042 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:43:21.417423  398042 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-dnjbc" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:23.431479  398042 pod_ready.go:103] pod "coredns-668d6bf9bc-dnjbc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:25.924700  398042 pod_ready.go:103] pod "coredns-668d6bf9bc-dnjbc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:28.424233  398042 pod_ready.go:103] pod "coredns-668d6bf9bc-dnjbc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:30.426554  398042 pod_ready.go:103] pod "coredns-668d6bf9bc-dnjbc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:32.423110  398042 pod_ready.go:93] pod "coredns-668d6bf9bc-dnjbc" in "kube-system" namespace has status "Ready":"True"
	I0127 11:43:32.423148  398042 pod_ready.go:82] duration metric: took 11.005688792s for pod "coredns-668d6bf9bc-dnjbc" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:32.423162  398042 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:32.427248  398042 pod_ready.go:93] pod "etcd-default-k8s-diff-port-259716" in "kube-system" namespace has status "Ready":"True"
	I0127 11:43:32.427268  398042 pod_ready.go:82] duration metric: took 4.096776ms for pod "etcd-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:32.427278  398042 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:32.431263  398042 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-259716" in "kube-system" namespace has status "Ready":"True"
	I0127 11:43:32.431279  398042 pod_ready.go:82] duration metric: took 3.995178ms for pod "kube-apiserver-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:32.431288  398042 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:32.435243  398042 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-259716" in "kube-system" namespace has status "Ready":"True"
	I0127 11:43:32.435260  398042 pod_ready.go:82] duration metric: took 3.965541ms for pod "kube-controller-manager-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:32.435267  398042 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7q8qt" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:32.439293  398042 pod_ready.go:93] pod "kube-proxy-7q8qt" in "kube-system" namespace has status "Ready":"True"
	I0127 11:43:32.439304  398042 pod_ready.go:82] duration metric: took 4.032023ms for pod "kube-proxy-7q8qt" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:32.439311  398042 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:32.820926  398042 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-259716" in "kube-system" namespace has status "Ready":"True"
	I0127 11:43:32.820946  398042 pod_ready.go:82] duration metric: took 381.628842ms for pod "kube-scheduler-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
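
Every one of the pod_ready.go status lines above boils down to reading the pod's Ready condition. A hedged client-go sketch of that single check (the pod name is taken from this run's log; the kubeconfig path is an assumed placeholder):

// podready.go: report whether a pod's Ready condition is True, which is what
// the `has status "Ready"` log lines above record. Sketch only.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the PodReady condition is ConditionTrue.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"kube-scheduler-default-k8s-diff-port-259716", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, isPodReady(pod))
}
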
	I0127 11:43:32.820957  398042 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:34.832482  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:36.838130  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:39.326527  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:41.327522  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:43.829373  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:45.829911  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:48.327991  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:50.837315  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:53.327783  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:55.829673  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:58.326850  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:00.327567  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:02.327719  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:04.827194  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:06.828278  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:08.830894  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:11.328064  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:13.830280  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:16.327536  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:18.832679  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:21.327007  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:23.328470  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:25.827598  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:27.828722  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:29.830806  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:32.328543  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:34.329095  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:36.828294  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:38.829523  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:40.839983  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:43.327901  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:45.829684  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:48.327486  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:50.832389  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:53.327729  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:55.328334  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:57.827808  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:59.831673  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:02.328293  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:04.828788  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:06.829734  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:09.327705  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:11.827734  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:14.327211  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:16.828022  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:18.829874  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:21.327476  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:23.329186  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:26.000152  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:28.327236  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:30.328717  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:32.829557  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:35.327707  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:37.328040  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:39.328212  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:41.328988  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:43.830562  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:45.831494  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:48.327123  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:50.328044  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:52.828441  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:54.861880  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:57.330075  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:59.827841  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:02.328747  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:04.832017  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:07.329118  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:09.829302  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:12.327351  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:14.328568  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:16.328803  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:18.831910  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:20.832599  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:23.327692  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:25.328388  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:27.829650  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:30.327475  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:32.328143  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:34.329168  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:36.827281  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:38.828146  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:41.325954  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:43.326308  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.326964  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:47.328930  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:49.832737  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:51.888716  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:54.328462  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:56.328612  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:58.330098  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:00.829532  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:03.329706  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:05.828992  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:07.830223  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:09.832600  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:12.327795  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:14.830412  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:16.830918  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:19.328023  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:21.329200  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:23.826718  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:25.826825  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:27.827615  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:30.327701  398042 pod_ready.go:103] pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:32.821072  398042 pod_ready.go:82] duration metric: took 4m0.000087372s for pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace to be "Ready" ...
	E0127 11:47:32.821098  398042 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-pvlcm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:47:32.821120  398042 pod_ready.go:39] duration metric: took 4m11.505571485s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:32.821156  398042 kubeadm.go:597] duration metric: took 4m19.762603796s to restartPrimaryControlPlane
	W0127 11:47:32.821223  398042 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:47:32.821258  398042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 11:47:34.526125  398042 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.704834767s)
	I0127 11:47:34.526218  398042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:47:34.540494  398042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:47:34.549706  398042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:47:34.558801  398042 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:47:34.558817  398042 kubeadm.go:157] found existing configuration files:
	
	I0127 11:47:34.558849  398042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 11:47:34.567178  398042 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:47:34.567219  398042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:47:34.576647  398042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 11:47:34.585299  398042 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:47:34.585347  398042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:47:34.594734  398042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 11:47:34.603511  398042 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:47:34.603560  398042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:47:34.612756  398042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 11:47:34.623436  398042 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:47:34.623489  398042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
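
The grep/rm pairs above encode a single rule: any kubeconfig under /etc/kubernetes that does not reference the expected endpoint https://control-plane.minikube.internal:8444 is treated as stale and removed before kubeadm init runs. A standalone sketch of that rule, assuming direct file access rather than minikube's ssh_runner (sketch only, not intended to be run on a live node):

// staleconf.go: remove kubeconfig files that do not mention the expected
// API server endpoint, mirroring the grep/rm sequence in the log above.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			log.Printf("keeping %s (references %s)", f, endpoint)
			continue
		}
		// File is missing or points at a different endpoint: treat as stale.
		if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
			log.Printf("could not remove %s: %v", f, rmErr)
			continue
		}
		log.Printf("stale config %s removed (or already absent)", f)
	}
}
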
	I0127 11:47:34.633000  398042 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:47:34.812659  398042 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:47:44.177086  398042 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:47:44.177174  398042 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:47:44.177279  398042 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:47:44.177436  398042 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:47:44.177594  398042 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:47:44.177688  398042 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:47:44.179128  398042 out.go:235]   - Generating certificates and keys ...
	I0127 11:47:44.179226  398042 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:47:44.179282  398042 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:47:44.179389  398042 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:47:44.179440  398042 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:47:44.179565  398042 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:47:44.179658  398042 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:47:44.179754  398042 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:47:44.179861  398042 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:47:44.179974  398042 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:47:44.180092  398042 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:47:44.180161  398042 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:47:44.180248  398042 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:47:44.180321  398042 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:47:44.180407  398042 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:47:44.180487  398042 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:47:44.180585  398042 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:47:44.180636  398042 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:47:44.180731  398042 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:47:44.180842  398042 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:47:44.182487  398042 out.go:235]   - Booting up control plane ...
	I0127 11:47:44.182593  398042 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:47:44.182696  398042 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:47:44.182775  398042 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:47:44.182925  398042 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:47:44.183034  398042 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:47:44.183088  398042 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:47:44.183235  398042 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:47:44.183394  398042 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:47:44.183494  398042 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00190418s
	I0127 11:47:44.183558  398042 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:47:44.183608  398042 kubeadm.go:310] [api-check] The API server is healthy after 5.502524566s
	I0127 11:47:44.183724  398042 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:47:44.183866  398042 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:47:44.183944  398042 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:47:44.184136  398042 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-259716 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:47:44.184201  398042 kubeadm.go:310] [bootstrap-token] Using token: w73ykw.zzyidsyn56buq3ny
	I0127 11:47:44.185350  398042 out.go:235]   - Configuring RBAC rules ...
	I0127 11:47:44.185489  398042 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:47:44.185642  398042 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:47:44.185810  398042 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:47:44.185934  398042 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:47:44.186071  398042 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:47:44.186190  398042 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:47:44.186319  398042 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:47:44.186387  398042 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:47:44.186464  398042 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:47:44.186478  398042 kubeadm.go:310] 
	I0127 11:47:44.186575  398042 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:47:44.186584  398042 kubeadm.go:310] 
	I0127 11:47:44.186704  398042 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:47:44.186722  398042 kubeadm.go:310] 
	I0127 11:47:44.186772  398042 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:47:44.186854  398042 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:47:44.186922  398042 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:47:44.186931  398042 kubeadm.go:310] 
	I0127 11:47:44.187010  398042 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:47:44.187020  398042 kubeadm.go:310] 
	I0127 11:47:44.187073  398042 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:47:44.187083  398042 kubeadm.go:310] 
	I0127 11:47:44.187174  398042 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:47:44.187293  398042 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:47:44.187391  398042 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:47:44.187401  398042 kubeadm.go:310] 
	I0127 11:47:44.187479  398042 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:47:44.187572  398042 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:47:44.187583  398042 kubeadm.go:310] 
	I0127 11:47:44.187723  398042 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token w73ykw.zzyidsyn56buq3ny \
	I0127 11:47:44.187881  398042 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c769a71fa2072963699012a67c9bb4b27b6fc88b52aea51191b7b2189ca81982 \
	I0127 11:47:44.187916  398042 kubeadm.go:310] 	--control-plane 
	I0127 11:47:44.187926  398042 kubeadm.go:310] 
	I0127 11:47:44.188060  398042 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:47:44.188073  398042 kubeadm.go:310] 
	I0127 11:47:44.188199  398042 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token w73ykw.zzyidsyn56buq3ny \
	I0127 11:47:44.188380  398042 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c769a71fa2072963699012a67c9bb4b27b6fc88b52aea51191b7b2189ca81982 
	I0127 11:47:44.188402  398042 cni.go:84] Creating CNI manager for ""
	I0127 11:47:44.188412  398042 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 11:47:44.189808  398042 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:47:44.191134  398042 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:47:44.204892  398042 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 11:47:44.225188  398042 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:47:44.225271  398042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:44.225335  398042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-259716 minikube.k8s.io/updated_at=2025_01_27T11_47_44_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=default-k8s-diff-port-259716 minikube.k8s.io/primary=true
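
The kubectl label call above stamps the control-plane node with minikube's bookkeeping labels. A hedged client-go equivalent using a strategic-merge patch (node name and label keys are taken from the log, only two of the labels are shown, and the kubeconfig path is an assumed placeholder):

// labelnode.go: add labels to a node via a strategic merge patch, the
// API-level equivalent of "kubectl label --overwrite nodes ...". Sketch only.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Strategic merge patch that only touches metadata.labels.
	patch := []byte(`{"metadata":{"labels":{` +
		`"minikube.k8s.io/name":"default-k8s-diff-port-259716",` +
		`"minikube.k8s.io/primary":"true"}}}`)
	node, err := cs.CoreV1().Nodes().Patch(context.Background(),
		"default-k8s-diff-port-259716", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("labeled node %s; labels now: %v\n", node.Name, node.Labels)
}
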
	I0127 11:47:44.513051  398042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:44.513159  398042 ops.go:34] apiserver oom_adj: -16
	I0127 11:47:45.013977  398042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:45.513821  398042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:46.013883  398042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:46.513560  398042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:47.013948  398042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:47.513902  398042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:48.013995  398042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:48.513786  398042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:48.616081  398042 kubeadm.go:1113] duration metric: took 4.390855676s to wait for elevateKubeSystemPrivileges
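
The repeated "kubectl get sa default" runs above are the retries behind elevateKubeSystemPrivileges: minikube creates the minikube-rbac cluster-admin binding (11:47:44.225271) and then polls until the default ServiceAccount exists in kube-system. A hedged client-go sketch of those two steps, with the timeout and polling interval as assumptions:

// elevate.go: wait for the kube-system "default" ServiceAccount, then bind it
// to cluster-admin, approximating the minikube-rbac step in the log above.
// Sketch only; kubeconfig path, timeout, and interval are assumptions.
package main

import (
	"context"
	"log"
	"time"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// Poll until the controller manager has created the default ServiceAccount.
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for {
		_, err := cs.CoreV1().ServiceAccounts("kube-system").Get(ctx, "default", metav1.GetOptions{})
		if err == nil {
			break
		}
		if time.Now().After(deadline) {
			log.Fatalf("default ServiceAccount never appeared: %v", err)
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}

	// Bind the service account to cluster-admin, tolerating an existing binding.
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef:    rbacv1.RoleRef{APIGroup: "rbac.authorization.k8s.io", Kind: "ClusterRole", Name: "cluster-admin"},
		Subjects:   []rbacv1.Subject{{Kind: "ServiceAccount", Name: "default", Namespace: "kube-system"}},
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{}); err != nil && !apierrors.IsAlreadyExists(err) {
		log.Fatal(err)
	}
	log.Println("default ServiceAccount present and bound to cluster-admin")
}
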
	I0127 11:47:48.616128  398042 kubeadm.go:394] duration metric: took 4m35.628318386s to StartCluster
	I0127 11:47:48.616156  398042 settings.go:142] acquiring lock: {Name:mkb277d193c8888d23a77778c65f322a69e59091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:48.616258  398042 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 11:47:48.617154  398042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/kubeconfig: {Name:mk12891275228a2835a35659c2ede45028f0a576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:48.617408  398042 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 11:47:48.617559  398042 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:47:48.617681  398042 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-259716"
	I0127 11:47:48.617706  398042 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-259716"
	W0127 11:47:48.617718  398042 addons.go:247] addon storage-provisioner should already be in state true
	I0127 11:47:48.617716  398042 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-259716"
	I0127 11:47:48.617729  398042 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-259716"
	I0127 11:47:48.617752  398042 host.go:66] Checking if "default-k8s-diff-port-259716" exists ...
	I0127 11:47:48.617752  398042 config.go:182] Loaded profile config "default-k8s-diff-port-259716": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:47:48.617766  398042 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-259716"
	W0127 11:47:48.617861  398042 addons.go:247] addon dashboard should already be in state true
	I0127 11:47:48.617892  398042 host.go:66] Checking if "default-k8s-diff-port-259716" exists ...
	I0127 11:47:48.617747  398042 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-259716"
	I0127 11:47:48.617733  398042 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-259716"
	I0127 11:47:48.618063  398042 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-259716"
	W0127 11:47:48.618080  398042 addons.go:247] addon metrics-server should already be in state true
	I0127 11:47:48.618109  398042 host.go:66] Checking if "default-k8s-diff-port-259716" exists ...
	I0127 11:47:48.618205  398042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:48.618247  398042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:48.618400  398042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:48.618420  398042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:48.618441  398042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:48.618456  398042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:48.618523  398042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:48.618543  398042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:48.618796  398042 out.go:177] * Verifying Kubernetes components...
	I0127 11:47:48.620034  398042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:47:48.635754  398042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0127 11:47:48.636228  398042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:48.636983  398042 main.go:141] libmachine: Using API Version  1
	I0127 11:47:48.637015  398042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:48.637407  398042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:48.637648  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetState
	I0127 11:47:48.638703  398042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I0127 11:47:48.639043  398042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41311
	I0127 11:47:48.639258  398042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:48.640302  398042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:48.640492  398042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44435
	I0127 11:47:48.640770  398042 main.go:141] libmachine: Using API Version  1
	I0127 11:47:48.640780  398042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:48.640898  398042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:48.641295  398042 main.go:141] libmachine: Using API Version  1
	I0127 11:47:48.641304  398042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:48.641342  398042 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-259716"
	W0127 11:47:48.641377  398042 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:47:48.641412  398042 host.go:66] Checking if "default-k8s-diff-port-259716" exists ...
	I0127 11:47:48.641603  398042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:48.641841  398042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:48.641886  398042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:48.641983  398042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:48.642006  398042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:48.642202  398042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:48.642231  398042 main.go:141] libmachine: Using API Version  1
	I0127 11:47:48.642243  398042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:48.642537  398042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:48.642882  398042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:48.642921  398042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:48.642927  398042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:48.642949  398042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:48.662481  398042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I0127 11:47:48.662517  398042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36193
	I0127 11:47:48.662635  398042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37081
	I0127 11:47:48.663014  398042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:48.663141  398042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:48.663221  398042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:48.663661  398042 main.go:141] libmachine: Using API Version  1
	I0127 11:47:48.663689  398042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:48.663785  398042 main.go:141] libmachine: Using API Version  1
	I0127 11:47:48.663812  398042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:48.663845  398042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45521
	I0127 11:47:48.664052  398042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:48.664241  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetState
	I0127 11:47:48.664308  398042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:48.664467  398042 main.go:141] libmachine: Using API Version  1
	I0127 11:47:48.664479  398042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:48.664917  398042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:48.665057  398042 main.go:141] libmachine: Using API Version  1
	I0127 11:47:48.665072  398042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:48.665704  398042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:48.666111  398042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:47:48.666154  398042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:48.666423  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:47:48.666435  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetState
	I0127 11:47:48.668008  398042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:48.668244  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:47:48.668347  398042 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:47:48.668711  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetState
	I0127 11:47:48.669490  398042 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:47:48.669728  398042 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:48.669745  398042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:47:48.669764  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:47:48.671545  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:47:48.671773  398042 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:47:48.672881  398042 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:47:48.672900  398042 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:47:48.672920  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:47:48.673024  398042 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:47:48.677694  398042 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:47:48.677708  398042 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:47:48.677723  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:47:48.680832  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:47:48.681665  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:47:48.681693  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:47:48.681724  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:47:48.681907  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHPort
	I0127 11:47:48.682033  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:47:48.682400  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHUsername
	I0127 11:47:48.682452  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:47:48.682886  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:47:48.682909  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:47:48.682970  398042 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/default-k8s-diff-port-259716/id_rsa Username:docker}
	I0127 11:47:48.682997  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHPort
	I0127 11:47:48.683045  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHPort
	I0127 11:47:48.683096  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:47:48.683113  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:47:48.683291  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:47:48.683293  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:47:48.683392  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHUsername
	I0127 11:47:48.683426  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHUsername
	I0127 11:47:48.683553  398042 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/default-k8s-diff-port-259716/id_rsa Username:docker}
	I0127 11:47:48.683788  398042 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/default-k8s-diff-port-259716/id_rsa Username:docker}
	I0127 11:47:48.685629  398042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I0127 11:47:48.686063  398042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:48.686484  398042 main.go:141] libmachine: Using API Version  1
	I0127 11:47:48.686503  398042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:48.686895  398042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:48.687080  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetState
	I0127 11:47:48.688489  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .DriverName
	I0127 11:47:48.688660  398042 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:48.688672  398042 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:47:48.688684  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHHostname
	I0127 11:47:48.690929  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:47:48.691503  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHPort
	I0127 11:47:48.691578  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b5:51", ip: ""} in network mk-default-k8s-diff-port-259716: {Iface:virbr1 ExpiryTime:2025-01-27 12:43:03 +0000 UTC Type:0 Mac:52:54:00:d7:b5:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:default-k8s-diff-port-259716 Clientid:01:52:54:00:d7:b5:51}
	I0127 11:47:48.691596  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | domain default-k8s-diff-port-259716 has defined IP address 192.168.39.215 and MAC address 52:54:00:d7:b5:51 in network mk-default-k8s-diff-port-259716
	I0127 11:47:48.691637  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHKeyPath
	I0127 11:47:48.691715  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .GetSSHUsername
	I0127 11:47:48.691780  398042 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/default-k8s-diff-port-259716/id_rsa Username:docker}
	I0127 11:47:48.880453  398042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:47:48.935813  398042 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-259716" to be "Ready" ...
	I0127 11:47:48.975261  398042 node_ready.go:49] node "default-k8s-diff-port-259716" has status "Ready":"True"
	I0127 11:47:48.975349  398042 node_ready.go:38] duration metric: took 39.49508ms for node "default-k8s-diff-port-259716" to be "Ready" ...
	I0127 11:47:48.975379  398042 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0127 11:47:48.999104  398042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:49.007421  398042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:49.028519  398042 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qpwkb" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:49.039447  398042 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:47:49.039471  398042 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:47:49.090650  398042 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:47:49.090678  398042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:47:49.205105  398042 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:47:49.205139  398042 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:47:49.235066  398042 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:47:49.235092  398042 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:47:49.316875  398042 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:49.316905  398042 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:47:49.331480  398042 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:47:49.331516  398042 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:47:49.440817  398042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:49.537906  398042 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:47:49.537931  398042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:47:49.688750  398042 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:47:49.688784  398042 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:47:49.768067  398042 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:47:49.768097  398042 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:47:49.946765  398042 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:47:49.946805  398042 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:47:50.120987  398042 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:47:50.121016  398042 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:47:50.161386  398042 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:50.161421  398042 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:47:50.195113  398042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:50.427670  398042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.428518701s)
	I0127 11:47:50.427807  398042 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:50.427709  398042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.420238938s)
	I0127 11:47:50.427851  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .Close
	I0127 11:47:50.427962  398042 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:50.428027  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .Close
	I0127 11:47:50.428585  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | Closing plugin on server side
	I0127 11:47:50.428633  398042 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:50.428656  398042 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:50.428668  398042 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:50.428676  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .Close
	I0127 11:47:50.428787  398042 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:50.428803  398042 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:50.428812  398042 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:50.428823  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .Close
	I0127 11:47:50.430847  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | Closing plugin on server side
	I0127 11:47:50.430868  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | Closing plugin on server side
	I0127 11:47:50.430898  398042 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:50.430903  398042 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:50.430913  398042 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:50.430913  398042 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:50.450520  398042 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:50.450548  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .Close
	I0127 11:47:50.450875  398042 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:50.450896  398042 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:50.926901  398042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.486023427s)
	I0127 11:47:50.926977  398042 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:50.927001  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .Close
	I0127 11:47:50.927355  398042 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:50.927386  398042 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:50.927395  398042 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:50.927405  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .Close
	I0127 11:47:50.928552  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | Closing plugin on server side
	I0127 11:47:50.928586  398042 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:50.928604  398042 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:50.928623  398042 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-259716"
	I0127 11:47:51.041790  398042 pod_ready.go:103] pod "coredns-668d6bf9bc-qpwkb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:51.605270  398042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.410085298s)
	I0127 11:47:51.605330  398042 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:51.605348  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .Close
	I0127 11:47:51.605755  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) DBG | Closing plugin on server side
	I0127 11:47:51.605771  398042 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:51.605788  398042 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:51.605803  398042 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:51.605812  398042 main.go:141] libmachine: (default-k8s-diff-port-259716) Calling .Close
	I0127 11:47:51.606071  398042 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:51.606086  398042 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:51.607501  398042 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-259716 addons enable metrics-server
	
	I0127 11:47:51.608776  398042 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 11:47:51.610004  398042 addons.go:514] duration metric: took 2.992452149s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
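The addon phase above amounts to copying each manifest onto the node and applying them in one kubectl call against the cluster's own kubeconfig. A minimal sketch of that pattern follows, using a shortened manifest list taken from the log; it illustrates the shape of the command, not minikube's addons code path:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Paths are taken from the log above; adjust for your environment.
		kubeconfig := "/var/lib/minikube/kubeconfig"
		manifests := []string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		}
		args := []string{"--kubeconfig", kubeconfig, "apply"}
		for _, m := range manifests {
			// kubectl accepts repeated -f flags, mirroring the single apply call in the log.
			args = append(args, "-f", m)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl apply failed: %v\n%s", err, out)
		}
		log.Printf("applied addons:\n%s", out)
	}
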
	I0127 11:47:53.535483  398042 pod_ready.go:103] pod "coredns-668d6bf9bc-qpwkb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:54.038060  398042 pod_ready.go:93] pod "coredns-668d6bf9bc-qpwkb" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:54.038095  398042 pod_ready.go:82] duration metric: took 5.009545915s for pod "coredns-668d6bf9bc-qpwkb" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:54.038117  398042 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-sqpwt" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:56.044618  398042 pod_ready.go:103] pod "coredns-668d6bf9bc-sqpwt" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:58.045809  398042 pod_ready.go:103] pod "coredns-668d6bf9bc-sqpwt" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:58.545262  398042 pod_ready.go:93] pod "coredns-668d6bf9bc-sqpwt" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:58.545284  398042 pod_ready.go:82] duration metric: took 4.507158878s for pod "coredns-668d6bf9bc-sqpwt" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:58.545294  398042 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:58.549356  398042 pod_ready.go:93] pod "etcd-default-k8s-diff-port-259716" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:58.549377  398042 pod_ready.go:82] duration metric: took 4.073186ms for pod "etcd-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:58.549387  398042 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:58.553961  398042 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-259716" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:58.553978  398042 pod_ready.go:82] duration metric: took 4.586113ms for pod "kube-apiserver-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:58.553986  398042 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:58.557615  398042 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-259716" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:58.557634  398042 pod_ready.go:82] duration metric: took 3.641551ms for pod "kube-controller-manager-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:58.557643  398042 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6r76d" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:58.561281  398042 pod_ready.go:93] pod "kube-proxy-6r76d" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:58.561297  398042 pod_ready.go:82] duration metric: took 3.649467ms for pod "kube-proxy-6r76d" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:58.561304  398042 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:58.942607  398042 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-259716" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:58.942633  398042 pod_ready.go:82] duration metric: took 381.32134ms for pod "kube-scheduler-default-k8s-diff-port-259716" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:58.942641  398042 pod_ready.go:39] duration metric: took 9.967244854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:58.942659  398042 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:47:58.942715  398042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:58.964074  398042 api_server.go:72] duration metric: took 10.34661847s to wait for apiserver process to appear ...
	I0127 11:47:58.964102  398042 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:47:58.964128  398042 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8444/healthz ...
	I0127 11:47:58.970139  398042 api_server.go:279] https://192.168.39.215:8444/healthz returned 200:
	ok
	I0127 11:47:58.971139  398042 api_server.go:141] control plane version: v1.32.1
	I0127 11:47:58.971167  398042 api_server.go:131] duration metric: took 7.056851ms to wait for apiserver health ...
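The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once the endpoint returns 200 with the body "ok". A minimal standalone version of that probe is shown below; certificate verification is skipped here purely to keep the sketch self-contained, whereas minikube verifies against the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping verification only to keep the sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		// Address and port (8444) taken from the log above.
		resp, err := client.Get("https://192.168.39.215:8444/healthz")
		if err != nil {
			log.Fatalf("healthz request failed: %v", err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
	}
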
	I0127 11:47:58.971178  398042 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:47:59.149893  398042 system_pods.go:59] 9 kube-system pods found
	I0127 11:47:59.149924  398042 system_pods.go:61] "coredns-668d6bf9bc-qpwkb" [96bd8222-6c6c-415e-b6bb-50719d83328e] Running
	I0127 11:47:59.149943  398042 system_pods.go:61] "coredns-668d6bf9bc-sqpwt" [8fbc37e0-83f4-4128-a797-83ab0a64977e] Running
	I0127 11:47:59.149950  398042 system_pods.go:61] "etcd-default-k8s-diff-port-259716" [20833cda-3830-4a8f-9169-376d1dc178ea] Running
	I0127 11:47:59.149956  398042 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-259716" [0e2e957e-eac6-4e00-8eb1-cb5af7b2550d] Running
	I0127 11:47:59.149961  398042 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-259716" [46d46c25-9b7d-4bdb-be52-c004df232d84] Running
	I0127 11:47:59.149976  398042 system_pods.go:61] "kube-proxy-6r76d" [24ddd36c-e208-43b2-a1b3-7b2606a5f088] Running
	I0127 11:47:59.149981  398042 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-259716" [cfafac8e-d411-4f6a-8cf0-024b8e3521f8] Running
	I0127 11:47:59.149990  398042 system_pods.go:61] "metrics-server-f79f97bbb-h9c6c" [761224b8-f4c1-4607-b17a-2cbad77ba72f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:47:59.149996  398042 system_pods.go:61] "storage-provisioner" [f5342d23-2429-419e-8358-c52a3461acb5] Running
	I0127 11:47:59.150009  398042 system_pods.go:74] duration metric: took 178.823941ms to wait for pod list to return data ...
	I0127 11:47:59.150018  398042 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:47:59.345239  398042 default_sa.go:45] found service account: "default"
	I0127 11:47:59.345267  398042 default_sa.go:55] duration metric: took 195.238289ms for default service account to be created ...
	I0127 11:47:59.345276  398042 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:47:59.581643  398042 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-259716 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-259716 -n default-k8s-diff-port-259716
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-259716 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-259716 logs -n 25: (1.38978863s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl status kubelet --all                       |                   |         |         |                     |                     |
	|         | --full --no-pager                                    |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl cat kubelet                                |                   |         |         |                     |                     |
	|         | --no-pager                                           |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | journalctl -xeu kubelet --all                        |                   |         |         |                     |                     |
	|         | --full --no-pager                                    |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	|         | systemctl status docker --all                        |                   |         |         |                     |                     |
	|         | --full --no-pager                                    |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl cat docker                                 |                   |         |         |                     |                     |
	|         | --no-pager                                           |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /etc/docker/daemon.json                              |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo docker                         | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	|         | system info                                          |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	|         | systemctl status cri-docker                          |                   |         |         |                     |                     |
	|         | --all --full --no-pager                              |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl cat cri-docker                             |                   |         |         |                     |                     |
	|         | --no-pager                                           |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | cri-dockerd --version                                |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl status containerd                          |                   |         |         |                     |                     |
	|         | --all --full --no-pager                              |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl cat containerd                             |                   |         |         |                     |                     |
	|         | --no-pager                                           |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /lib/systemd/system/containerd.service               |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo cat                            | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /etc/containerd/config.toml                          |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | containerd config dump                               |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	|         | systemctl status crio --all                          |                   |         |         |                     |                     |
	|         | --full --no-pager                                    |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo                                | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | systemctl cat crio --no-pager                        |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo find                           | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                   |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                   |         |         |                     |                     |
	| ssh     | -p bridge-230154 sudo crio                           | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | config                                               |                   |         |         |                     |                     |
	| delete  | -p bridge-230154                                     | bridge-230154     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	| delete  | -p no-preload-976043                                 | no-preload-976043 | jenkins | v1.35.0 | 27 Jan 25 12:08 UTC | 27 Jan 25 12:08 UTC |
	|---------|------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:51:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:51:47.607978  410030 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:51:47.608091  410030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:51:47.608100  410030 out.go:358] Setting ErrFile to fd 2...
	I0127 11:51:47.608109  410030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:51:47.608278  410030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
	I0127 11:51:47.608812  410030 out.go:352] Setting JSON to false
	I0127 11:51:47.609953  410030 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9253,"bootTime":1737969455,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:51:47.610057  410030 start.go:139] virtualization: kvm guest
	I0127 11:51:47.611895  410030 out.go:177] * [bridge-230154] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:51:47.613441  410030 notify.go:220] Checking for updates...
	I0127 11:51:47.613479  410030 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:51:47.614719  410030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:51:47.615971  410030 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 11:51:47.617111  410030 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	I0127 11:51:47.618157  410030 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:51:47.619361  410030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:51:47.620941  410030 config.go:182] Loaded profile config "default-k8s-diff-port-259716": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:51:47.621061  410030 config.go:182] Loaded profile config "flannel-230154": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:51:47.621206  410030 config.go:182] Loaded profile config "no-preload-976043": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:51:47.621328  410030 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:51:47.658431  410030 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 11:51:47.659436  410030 start.go:297] selected driver: kvm2
	I0127 11:51:47.659452  410030 start.go:901] validating driver "kvm2" against <nil>
	I0127 11:51:47.659462  410030 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:51:47.660244  410030 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:51:47.660346  410030 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-348858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:51:47.676075  410030 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:51:47.676119  410030 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:51:47.676407  410030 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:51:47.676445  410030 cni.go:84] Creating CNI manager for "bridge"
	I0127 11:51:47.676456  410030 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 11:51:47.676521  410030 start.go:340] cluster config:
	{Name:bridge-230154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:51:47.676642  410030 iso.go:125] acquiring lock: {Name:mk6cdd2a3d0bfb3682c1f0c806368944f23c4809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:51:47.677997  410030 out.go:177] * Starting "bridge-230154" primary control-plane node in "bridge-230154" cluster
	I0127 11:51:47.678894  410030 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:51:47.678924  410030 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 11:51:47.678936  410030 cache.go:56] Caching tarball of preloaded images
	I0127 11:51:47.679024  410030 preload.go:172] Found /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 11:51:47.679037  410030 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 11:51:47.679160  410030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/config.json ...
	I0127 11:51:47.679185  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/config.json: {Name:mk2b6cd63816fa28cdffe5707c10ed7a16feb9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:51:47.679337  410030 start.go:360] acquireMachinesLock for bridge-230154: {Name:mk69dba1a41baeb0794a28159a5cef220370e224 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:51:47.679375  410030 start.go:364] duration metric: took 23.748µs to acquireMachinesLock for "bridge-230154"
	I0127 11:51:47.679398  410030 start.go:93] Provisioning new machine with config: &{Name:bridge-230154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 11:51:47.679474  410030 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 11:51:46.323131  408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:51:48.324596  408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:51:47.680780  410030 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 11:51:47.680920  410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:51:47.680961  410030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:51:47.695019  410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34083
	I0127 11:51:47.695469  410030 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:51:47.696023  410030 main.go:141] libmachine: Using API Version  1
	I0127 11:51:47.696045  410030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:51:47.696373  410030 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:51:47.696603  410030 main.go:141] libmachine: (bridge-230154) Calling .GetMachineName
	I0127 11:51:47.696816  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:51:47.696969  410030 start.go:159] libmachine.API.Create for "bridge-230154" (driver="kvm2")
	I0127 11:51:47.696999  410030 client.go:168] LocalClient.Create starting
	I0127 11:51:47.697034  410030 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem
	I0127 11:51:47.697071  410030 main.go:141] libmachine: Decoding PEM data...
	I0127 11:51:47.697092  410030 main.go:141] libmachine: Parsing certificate...
	I0127 11:51:47.697163  410030 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem
	I0127 11:51:47.697192  410030 main.go:141] libmachine: Decoding PEM data...
	I0127 11:51:47.697220  410030 main.go:141] libmachine: Parsing certificate...
	I0127 11:51:47.697248  410030 main.go:141] libmachine: Running pre-create checks...
	I0127 11:51:47.697262  410030 main.go:141] libmachine: (bridge-230154) Calling .PreCreateCheck
	I0127 11:51:47.697637  410030 main.go:141] libmachine: (bridge-230154) Calling .GetConfigRaw
	I0127 11:51:47.698098  410030 main.go:141] libmachine: Creating machine...
	I0127 11:51:47.698113  410030 main.go:141] libmachine: (bridge-230154) Calling .Create
	I0127 11:51:47.698255  410030 main.go:141] libmachine: (bridge-230154) creating KVM machine...
	I0127 11:51:47.698270  410030 main.go:141] libmachine: (bridge-230154) creating network...
	I0127 11:51:47.699710  410030 main.go:141] libmachine: (bridge-230154) DBG | found existing default KVM network
	I0127 11:51:47.701093  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:47.700951  410053 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a9:bc:42} reservation:<nil>}
	I0127 11:51:47.702050  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:47.701955  410053 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:50:a8:75} reservation:<nil>}
	I0127 11:51:47.703137  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:47.703062  410053 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000287220}
	I0127 11:51:47.703226  410030 main.go:141] libmachine: (bridge-230154) DBG | created network xml: 
	I0127 11:51:47.703248  410030 main.go:141] libmachine: (bridge-230154) DBG | <network>
	I0127 11:51:47.703258  410030 main.go:141] libmachine: (bridge-230154) DBG |   <name>mk-bridge-230154</name>
	I0127 11:51:47.703285  410030 main.go:141] libmachine: (bridge-230154) DBG |   <dns enable='no'/>
	I0127 11:51:47.703298  410030 main.go:141] libmachine: (bridge-230154) DBG |   
	I0127 11:51:47.703306  410030 main.go:141] libmachine: (bridge-230154) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0127 11:51:47.703321  410030 main.go:141] libmachine: (bridge-230154) DBG |     <dhcp>
	I0127 11:51:47.703334  410030 main.go:141] libmachine: (bridge-230154) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0127 11:51:47.703345  410030 main.go:141] libmachine: (bridge-230154) DBG |     </dhcp>
	I0127 11:51:47.703361  410030 main.go:141] libmachine: (bridge-230154) DBG |   </ip>
	I0127 11:51:47.703384  410030 main.go:141] libmachine: (bridge-230154) DBG |   
	I0127 11:51:47.703400  410030 main.go:141] libmachine: (bridge-230154) DBG | </network>
	I0127 11:51:47.703410  410030 main.go:141] libmachine: (bridge-230154) DBG | 
	I0127 11:51:47.707961  410030 main.go:141] libmachine: (bridge-230154) DBG | trying to create private KVM network mk-bridge-230154 192.168.61.0/24...
	I0127 11:51:47.780019  410030 main.go:141] libmachine: (bridge-230154) DBG | private KVM network mk-bridge-230154 192.168.61.0/24 created
	I0127 11:51:47.780050  410030 main.go:141] libmachine: (bridge-230154) setting up store path in /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154 ...
	I0127 11:51:47.780064  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:47.779969  410053 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20319-348858/.minikube
	I0127 11:51:47.780075  410030 main.go:141] libmachine: (bridge-230154) building disk image from file:///home/jenkins/minikube-integration/20319-348858/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 11:51:47.780095  410030 main.go:141] libmachine: (bridge-230154) Downloading /home/jenkins/minikube-integration/20319-348858/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20319-348858/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 11:51:48.077713  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.077516  410053 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa...
	I0127 11:51:48.209215  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.209093  410053 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/bridge-230154.rawdisk...
	I0127 11:51:48.209256  410030 main.go:141] libmachine: (bridge-230154) DBG | Writing magic tar header
	I0127 11:51:48.209272  410030 main.go:141] libmachine: (bridge-230154) DBG | Writing SSH key tar header
	I0127 11:51:48.209286  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.209206  410053 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154 ...
	I0127 11:51:48.209303  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154
	I0127 11:51:48.209343  410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154 (perms=drwx------)
	I0127 11:51:48.209355  410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration/20319-348858/.minikube/machines (perms=drwxr-xr-x)
	I0127 11:51:48.209368  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-348858/.minikube/machines
	I0127 11:51:48.209389  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-348858/.minikube
	I0127 11:51:48.209411  410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration/20319-348858/.minikube (perms=drwxr-xr-x)
	I0127 11:51:48.209424  410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration/20319-348858 (perms=drwxrwxr-x)
	I0127 11:51:48.209432  410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 11:51:48.209444  410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 11:51:48.209455  410030 main.go:141] libmachine: (bridge-230154) creating domain...
	I0127 11:51:48.209468  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-348858
	I0127 11:51:48.209481  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 11:51:48.209495  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins
	I0127 11:51:48.209503  410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home
	I0127 11:51:48.209510  410030 main.go:141] libmachine: (bridge-230154) DBG | skipping /home - not owner
	I0127 11:51:48.210458  410030 main.go:141] libmachine: (bridge-230154) define libvirt domain using xml: 
	I0127 11:51:48.210486  410030 main.go:141] libmachine: (bridge-230154) <domain type='kvm'>
	I0127 11:51:48.210494  410030 main.go:141] libmachine: (bridge-230154)   <name>bridge-230154</name>
	I0127 11:51:48.210500  410030 main.go:141] libmachine: (bridge-230154)   <memory unit='MiB'>3072</memory>
	I0127 11:51:48.210504  410030 main.go:141] libmachine: (bridge-230154)   <vcpu>2</vcpu>
	I0127 11:51:48.210509  410030 main.go:141] libmachine: (bridge-230154)   <features>
	I0127 11:51:48.210519  410030 main.go:141] libmachine: (bridge-230154)     <acpi/>
	I0127 11:51:48.210526  410030 main.go:141] libmachine: (bridge-230154)     <apic/>
	I0127 11:51:48.210531  410030 main.go:141] libmachine: (bridge-230154)     <pae/>
	I0127 11:51:48.210535  410030 main.go:141] libmachine: (bridge-230154)     
	I0127 11:51:48.210542  410030 main.go:141] libmachine: (bridge-230154)   </features>
	I0127 11:51:48.210549  410030 main.go:141] libmachine: (bridge-230154)   <cpu mode='host-passthrough'>
	I0127 11:51:48.210554  410030 main.go:141] libmachine: (bridge-230154)   
	I0127 11:51:48.210560  410030 main.go:141] libmachine: (bridge-230154)   </cpu>
	I0127 11:51:48.210573  410030 main.go:141] libmachine: (bridge-230154)   <os>
	I0127 11:51:48.210585  410030 main.go:141] libmachine: (bridge-230154)     <type>hvm</type>
	I0127 11:51:48.210590  410030 main.go:141] libmachine: (bridge-230154)     <boot dev='cdrom'/>
	I0127 11:51:48.210595  410030 main.go:141] libmachine: (bridge-230154)     <boot dev='hd'/>
	I0127 11:51:48.210601  410030 main.go:141] libmachine: (bridge-230154)     <bootmenu enable='no'/>
	I0127 11:51:48.210607  410030 main.go:141] libmachine: (bridge-230154)   </os>
	I0127 11:51:48.210612  410030 main.go:141] libmachine: (bridge-230154)   <devices>
	I0127 11:51:48.210617  410030 main.go:141] libmachine: (bridge-230154)     <disk type='file' device='cdrom'>
	I0127 11:51:48.210627  410030 main.go:141] libmachine: (bridge-230154)       <source file='/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/boot2docker.iso'/>
	I0127 11:51:48.210631  410030 main.go:141] libmachine: (bridge-230154)       <target dev='hdc' bus='scsi'/>
	I0127 11:51:48.210639  410030 main.go:141] libmachine: (bridge-230154)       <readonly/>
	I0127 11:51:48.210643  410030 main.go:141] libmachine: (bridge-230154)     </disk>
	I0127 11:51:48.210666  410030 main.go:141] libmachine: (bridge-230154)     <disk type='file' device='disk'>
	I0127 11:51:48.210688  410030 main.go:141] libmachine: (bridge-230154)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 11:51:48.210711  410030 main.go:141] libmachine: (bridge-230154)       <source file='/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/bridge-230154.rawdisk'/>
	I0127 11:51:48.210732  410030 main.go:141] libmachine: (bridge-230154)       <target dev='hda' bus='virtio'/>
	I0127 11:51:48.210743  410030 main.go:141] libmachine: (bridge-230154)     </disk>
	I0127 11:51:48.210753  410030 main.go:141] libmachine: (bridge-230154)     <interface type='network'>
	I0127 11:51:48.210760  410030 main.go:141] libmachine: (bridge-230154)       <source network='mk-bridge-230154'/>
	I0127 11:51:48.210767  410030 main.go:141] libmachine: (bridge-230154)       <model type='virtio'/>
	I0127 11:51:48.210780  410030 main.go:141] libmachine: (bridge-230154)     </interface>
	I0127 11:51:48.210787  410030 main.go:141] libmachine: (bridge-230154)     <interface type='network'>
	I0127 11:51:48.210792  410030 main.go:141] libmachine: (bridge-230154)       <source network='default'/>
	I0127 11:51:48.210798  410030 main.go:141] libmachine: (bridge-230154)       <model type='virtio'/>
	I0127 11:51:48.210808  410030 main.go:141] libmachine: (bridge-230154)     </interface>
	I0127 11:51:48.210825  410030 main.go:141] libmachine: (bridge-230154)     <serial type='pty'>
	I0127 11:51:48.210834  410030 main.go:141] libmachine: (bridge-230154)       <target port='0'/>
	I0127 11:51:48.210838  410030 main.go:141] libmachine: (bridge-230154)     </serial>
	I0127 11:51:48.210847  410030 main.go:141] libmachine: (bridge-230154)     <console type='pty'>
	I0127 11:51:48.210858  410030 main.go:141] libmachine: (bridge-230154)       <target type='serial' port='0'/>
	I0127 11:51:48.210867  410030 main.go:141] libmachine: (bridge-230154)     </console>
	I0127 11:51:48.210878  410030 main.go:141] libmachine: (bridge-230154)     <rng model='virtio'>
	I0127 11:51:48.210890  410030 main.go:141] libmachine: (bridge-230154)       <backend model='random'>/dev/random</backend>
	I0127 11:51:48.210898  410030 main.go:141] libmachine: (bridge-230154)     </rng>
	I0127 11:51:48.210903  410030 main.go:141] libmachine: (bridge-230154)     
	I0127 11:51:48.210909  410030 main.go:141] libmachine: (bridge-230154)     
	I0127 11:51:48.210913  410030 main.go:141] libmachine: (bridge-230154)   </devices>
	I0127 11:51:48.210918  410030 main.go:141] libmachine: (bridge-230154) </domain>
	I0127 11:51:48.210926  410030 main.go:141] libmachine: (bridge-230154) 
	I0127 11:51:48.214625  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:37:b6:92 in network default
	I0127 11:51:48.215133  410030 main.go:141] libmachine: (bridge-230154) starting domain...
	I0127 11:51:48.215157  410030 main.go:141] libmachine: (bridge-230154) ensuring networks are active...
	I0127 11:51:48.215168  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:48.215860  410030 main.go:141] libmachine: (bridge-230154) Ensuring network default is active
	I0127 11:51:48.216193  410030 main.go:141] libmachine: (bridge-230154) Ensuring network mk-bridge-230154 is active
	I0127 11:51:48.216783  410030 main.go:141] libmachine: (bridge-230154) getting domain XML...
	I0127 11:51:48.217458  410030 main.go:141] libmachine: (bridge-230154) creating domain...
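The driver builds the network and domain XML printed above and submits both through libvirt. Done by hand with virsh, the same sequence looks roughly like the sketch below; the XML file names are assumptions (they would hold the two XML documents from the log), and this is not the libmachine code path itself:

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and aborts on failure, printing the captured output.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		// File names are assumptions; they would contain the XML shown in the log above.
		run("virsh", "net-define", "mk-bridge-230154.xml") // private network definition
		run("virsh", "net-start", "mk-bridge-230154")
		run("virsh", "define", "bridge-230154.xml") // domain definition
		run("virsh", "start", "bridge-230154")
	}
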
	I0127 11:51:48.569774  410030 main.go:141] libmachine: (bridge-230154) waiting for IP...
	I0127 11:51:48.570778  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:48.571317  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:48.571362  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.571309  410053 retry.go:31] will retry after 222.051521ms: waiting for domain to come up
	I0127 11:51:48.794921  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:48.795488  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:48.795532  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.795451  410053 retry.go:31] will retry after 300.550406ms: waiting for domain to come up
	I0127 11:51:49.098085  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:49.098673  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:49.098705  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:49.098646  410053 retry.go:31] will retry after 351.204659ms: waiting for domain to come up
	I0127 11:51:49.450989  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:49.451523  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:49.451547  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:49.451503  410053 retry.go:31] will retry after 551.090722ms: waiting for domain to come up
	I0127 11:51:50.003672  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:50.004175  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:50.004220  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:50.004153  410053 retry.go:31] will retry after 550.280324ms: waiting for domain to come up
	I0127 11:51:50.555950  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:50.556457  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:50.556489  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:50.556430  410053 retry.go:31] will retry after 583.250306ms: waiting for domain to come up
	I0127 11:51:51.140978  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:51.141558  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:51.141627  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:51.141533  410053 retry.go:31] will retry after 1.176790151s: waiting for domain to come up
	I0127 11:51:52.320049  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:52.320729  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:52.320797  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:52.320689  410053 retry.go:31] will retry after 1.176590374s: waiting for domain to come up
	I0127 11:51:50.326882  408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:51:52.823007  408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:51:53.498996  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:53.499617  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:53.499644  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:53.499590  410053 retry.go:31] will retry after 1.435449708s: waiting for domain to come up
	I0127 11:51:54.937088  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:54.937656  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:54.937687  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:54.937628  410053 retry.go:31] will retry after 1.670320015s: waiting for domain to come up
	I0127 11:51:56.609490  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:56.610076  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:56.610106  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:56.610030  410053 retry.go:31] will retry after 2.430005713s: waiting for domain to come up
	I0127 11:51:55.322705  408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:51:57.331001  408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
	I0127 11:51:59.822867  408290 pod_ready.go:93] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"True"
	I0127 11:51:59.822893  408290 pod_ready.go:82] duration metric: took 18.006590764s for pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.822903  408290 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-x26ng" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.827408  408290 pod_ready.go:93] pod "coredns-668d6bf9bc-x26ng" in "kube-system" namespace has status "Ready":"True"
	I0127 11:51:59.827431  408290 pod_ready.go:82] duration metric: took 4.521822ms for pod "coredns-668d6bf9bc-x26ng" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.827439  408290 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.831731  408290 pod_ready.go:93] pod "etcd-flannel-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:51:59.831754  408290 pod_ready.go:82] duration metric: took 4.307302ms for pod "etcd-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.831766  408290 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.836455  408290 pod_ready.go:93] pod "kube-apiserver-flannel-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:51:59.836476  408290 pod_ready.go:82] duration metric: took 4.701033ms for pod "kube-apiserver-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.836485  408290 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.841564  408290 pod_ready.go:93] pod "kube-controller-manager-flannel-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:51:59.841607  408290 pod_ready.go:82] duration metric: took 5.114623ms for pod "kube-controller-manager-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.841619  408290 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-fwvhb" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:00.221093  408290 pod_ready.go:93] pod "kube-proxy-fwvhb" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:00.221117  408290 pod_ready.go:82] duration metric: took 379.489464ms for pod "kube-proxy-fwvhb" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:00.221127  408290 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:51:59.041589  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:51:59.042126  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:51:59.042157  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:59.042094  410053 retry.go:31] will retry after 2.320988246s: waiting for domain to come up
	I0127 11:52:01.364475  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:01.365092  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:52:01.365148  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:52:01.365068  410053 retry.go:31] will retry after 4.110080679s: waiting for domain to come up
	I0127 11:52:00.620378  408290 pod_ready.go:93] pod "kube-scheduler-flannel-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:00.620412  408290 pod_ready.go:82] duration metric: took 399.276857ms for pod "kube-scheduler-flannel-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:00.620423  408290 pod_ready.go:39] duration metric: took 18.811740813s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
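For context, the pod_ready.go checks above boil down to polling each pod until its Ready condition turns True. A rough client-go sketch of that check, with the kubeconfig path and pod name as placeholders (minikube's own implementation differs in detail):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's Ready condition is True, the same
	// signal the pod_ready.go lines above are waiting on.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path and pod name; substitute your own.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-668d6bf9bc-cxhgb", metav1.GetOptions{})
			if err == nil && podIsReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // poll on roughly the cadence seen above
		}
	}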
	I0127 11:52:00.620442  408290 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:52:00.620509  408290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:52:00.636203  408290 api_server.go:72] duration metric: took 26.524075024s to wait for apiserver process to appear ...
	I0127 11:52:00.636225  408290 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:52:00.636241  408290 api_server.go:253] Checking apiserver healthz at https://192.168.50.249:8443/healthz ...
	I0127 11:52:00.640488  408290 api_server.go:279] https://192.168.50.249:8443/healthz returned 200:
	ok
	I0127 11:52:00.641304  408290 api_server.go:141] control plane version: v1.32.1
	I0127 11:52:00.641328  408290 api_server.go:131] duration metric: took 5.095135ms to wait for apiserver health ...
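The healthz probe is an ordinary HTTPS GET against the apiserver; a standalone sketch using the address from the log above (it skips certificate verification for brevity, whereas minikube authenticates against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// InsecureSkipVerify keeps the sketch short; the real check trusts the
		// cluster CA instead. Address copied from the log above.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.50.249:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}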
	I0127 11:52:00.641338  408290 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:52:00.823404  408290 system_pods.go:59] 8 kube-system pods found
	I0127 11:52:00.823440  408290 system_pods.go:61] "coredns-668d6bf9bc-cxhgb" [1b5c455f-cd3e-4049-ad66-0b5ac83e0cfc] Running
	I0127 11:52:00.823447  408290 system_pods.go:61] "coredns-668d6bf9bc-x26ng" [faddde6c-95bb-43ed-8312-9cb6d1381b76] Running
	I0127 11:52:00.823451  408290 system_pods.go:61] "etcd-flannel-230154" [04cfa9e0-f3d2-4147-a565-73d9a56314be] Running
	I0127 11:52:00.823457  408290 system_pods.go:61] "kube-apiserver-flannel-230154" [b7e45b11-41e6-4471-b69f-ebcfa9fe0b11] Running
	I0127 11:52:00.823460  408290 system_pods.go:61] "kube-controller-manager-flannel-230154" [db9c61ca-4433-474f-b896-bf75b5586aa8] Running
	I0127 11:52:00.823464  408290 system_pods.go:61] "kube-proxy-fwvhb" [c9df58ca-9fda-4b0d-83d3-b0d5771a2b8d] Running
	I0127 11:52:00.823468  408290 system_pods.go:61] "kube-scheduler-flannel-230154" [ef963048-9064-4a1b-8c7c-0b560ac1073e] Running
	I0127 11:52:00.823473  408290 system_pods.go:61] "storage-provisioner" [1d37e577-26fc-4920-addd-4c2b9ea83d4f] Running
	I0127 11:52:00.823480  408290 system_pods.go:74] duration metric: took 182.135829ms to wait for pod list to return data ...
	I0127 11:52:00.823492  408290 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:52:01.019648  408290 default_sa.go:45] found service account: "default"
	I0127 11:52:01.019672  408290 default_sa.go:55] duration metric: took 196.17422ms for default service account to be created ...
	I0127 11:52:01.019680  408290 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:52:01.222213  408290 system_pods.go:87] 8 kube-system pods found
	I0127 11:52:05.478491  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:05.479050  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
	I0127 11:52:05.479075  410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:52:05.479016  410053 retry.go:31] will retry after 3.983085371s: waiting for domain to come up
	I0127 11:52:09.463887  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:09.464547  410030 main.go:141] libmachine: (bridge-230154) found domain IP: 192.168.61.114
	I0127 11:52:09.464572  410030 main.go:141] libmachine: (bridge-230154) reserving static IP address...
	I0127 11:52:09.464581  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has current primary IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:09.464980  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find host DHCP lease matching {name: "bridge-230154", mac: "52:54:00:79:3a:f7", ip: "192.168.61.114"} in network mk-bridge-230154
	I0127 11:52:09.541183  410030 main.go:141] libmachine: (bridge-230154) reserved static IP address 192.168.61.114 for domain bridge-230154
	I0127 11:52:09.541215  410030 main.go:141] libmachine: (bridge-230154) waiting for SSH...
	I0127 11:52:09.541226  410030 main.go:141] libmachine: (bridge-230154) DBG | Getting to WaitForSSH function...
	I0127 11:52:09.544735  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:09.545125  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154
	I0127 11:52:09.545156  410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find defined IP address of network mk-bridge-230154 interface with MAC address 52:54:00:79:3a:f7
	I0127 11:52:09.545335  410030 main.go:141] libmachine: (bridge-230154) DBG | Using SSH client type: external
	I0127 11:52:09.545351  410030 main.go:141] libmachine: (bridge-230154) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa (-rw-------)
	I0127 11:52:09.545396  410030 main.go:141] libmachine: (bridge-230154) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:52:09.545409  410030 main.go:141] libmachine: (bridge-230154) DBG | About to run SSH command:
	I0127 11:52:09.545431  410030 main.go:141] libmachine: (bridge-230154) DBG | exit 0
	I0127 11:52:09.549092  410030 main.go:141] libmachine: (bridge-230154) DBG | SSH cmd err, output: exit status 255: 
	I0127 11:52:09.549118  410030 main.go:141] libmachine: (bridge-230154) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0127 11:52:09.549128  410030 main.go:141] libmachine: (bridge-230154) DBG | command : exit 0
	I0127 11:52:09.549141  410030 main.go:141] libmachine: (bridge-230154) DBG | err     : exit status 255
	I0127 11:52:09.549152  410030 main.go:141] libmachine: (bridge-230154) DBG | output  : 
	I0127 11:52:12.550382  410030 main.go:141] libmachine: (bridge-230154) DBG | Getting to WaitForSSH function...
	I0127 11:52:12.552791  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.553322  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:12.553351  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.553432  410030 main.go:141] libmachine: (bridge-230154) DBG | Using SSH client type: external
	I0127 11:52:12.553481  410030 main.go:141] libmachine: (bridge-230154) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa (-rw-------)
	I0127 11:52:12.553525  410030 main.go:141] libmachine: (bridge-230154) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:52:12.553539  410030 main.go:141] libmachine: (bridge-230154) DBG | About to run SSH command:
	I0127 11:52:12.553563  410030 main.go:141] libmachine: (bridge-230154) DBG | exit 0
	I0127 11:52:12.681782  410030 main.go:141] libmachine: (bridge-230154) DBG | SSH cmd err, output: <nil>: 
	I0127 11:52:12.682047  410030 main.go:141] libmachine: (bridge-230154) KVM machine creation complete
	I0127 11:52:12.682445  410030 main.go:141] libmachine: (bridge-230154) Calling .GetConfigRaw
	I0127 11:52:12.682967  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:12.683184  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:12.683394  410030 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 11:52:12.683415  410030 main.go:141] libmachine: (bridge-230154) Calling .GetState
	I0127 11:52:12.684785  410030 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 11:52:12.684823  410030 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 11:52:12.684832  410030 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 11:52:12.684844  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:12.687551  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.687960  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:12.687997  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.688103  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:12.688306  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:12.688464  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:12.688609  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:12.688818  410030 main.go:141] libmachine: Using SSH client type: native
	I0127 11:52:12.689070  410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.114 22 <nil> <nil>}
	I0127 11:52:12.689084  410030 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 11:52:12.800827  410030 main.go:141] libmachine: SSH cmd err, output: <nil>: 
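The WaitForSSH step amounts to running a no-op "exit 0" over SSH until sshd accepts the machine's key. An illustrative Go equivalent using golang.org/x/crypto/ssh, with the key path and address copied from the log (a sketch, not minikube's own client code):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and address are taken from the log above; treat them as placeholders.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", "192.168.61.114:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// The same liveness probe the provisioner runs: a no-op command that only
		// succeeds once sshd accepts authenticated sessions.
		fmt.Println("exit 0 ->", sess.Run("exit 0"))
	}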
	I0127 11:52:12.800849  410030 main.go:141] libmachine: Detecting the provisioner...
	I0127 11:52:12.800859  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:12.803312  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.803747  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:12.803778  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.803968  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:12.804181  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:12.804339  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:12.804499  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:12.804712  410030 main.go:141] libmachine: Using SSH client type: native
	I0127 11:52:12.804930  410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.114 22 <nil> <nil>}
	I0127 11:52:12.804944  410030 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 11:52:12.922388  410030 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 11:52:12.922499  410030 main.go:141] libmachine: found compatible host: buildroot
	I0127 11:52:12.922517  410030 main.go:141] libmachine: Provisioning with buildroot...
	I0127 11:52:12.922528  410030 main.go:141] libmachine: (bridge-230154) Calling .GetMachineName
	I0127 11:52:12.922767  410030 buildroot.go:166] provisioning hostname "bridge-230154"
	I0127 11:52:12.922793  410030 main.go:141] libmachine: (bridge-230154) Calling .GetMachineName
	I0127 11:52:12.922988  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:12.925557  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.925920  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:12.925951  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:12.926089  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:12.926266  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:12.926402  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:12.926527  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:12.926642  410030 main.go:141] libmachine: Using SSH client type: native
	I0127 11:52:12.926867  410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.114 22 <nil> <nil>}
	I0127 11:52:12.926884  410030 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-230154 && echo "bridge-230154" | sudo tee /etc/hostname
	I0127 11:52:13.055349  410030 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-230154
	
	I0127 11:52:13.055376  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.057804  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.058160  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.058184  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.058377  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:13.058583  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.058746  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.058898  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:13.059086  410030 main.go:141] libmachine: Using SSH client type: native
	I0127 11:52:13.059305  410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.114 22 <nil> <nil>}
	I0127 11:52:13.059340  410030 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-230154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-230154/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-230154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:52:13.182533  410030 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:52:13.182574  410030 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-348858/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-348858/.minikube}
	I0127 11:52:13.182607  410030 buildroot.go:174] setting up certificates
	I0127 11:52:13.182618  410030 provision.go:84] configureAuth start
	I0127 11:52:13.182631  410030 main.go:141] libmachine: (bridge-230154) Calling .GetMachineName
	I0127 11:52:13.182846  410030 main.go:141] libmachine: (bridge-230154) Calling .GetIP
	I0127 11:52:13.185388  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.185727  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.185753  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.185888  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.188052  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.188418  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.188451  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.188586  410030 provision.go:143] copyHostCerts
	I0127 11:52:13.188644  410030 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem, removing ...
	I0127 11:52:13.188668  410030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem
	I0127 11:52:13.188770  410030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem (1082 bytes)
	I0127 11:52:13.188901  410030 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem, removing ...
	I0127 11:52:13.188912  410030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem
	I0127 11:52:13.188951  410030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem (1123 bytes)
	I0127 11:52:13.189068  410030 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem, removing ...
	I0127 11:52:13.189080  410030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem
	I0127 11:52:13.189133  410030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem (1679 bytes)
	I0127 11:52:13.189206  410030 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem org=jenkins.bridge-230154 san=[127.0.0.1 192.168.61.114 bridge-230154 localhost minikube]
	I0127 11:52:13.437569  410030 provision.go:177] copyRemoteCerts
	I0127 11:52:13.437657  410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:52:13.437681  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.440100  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.440463  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.440498  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.440655  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:13.440869  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.441020  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:13.441174  410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
	I0127 11:52:13.527720  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:52:13.553220  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 11:52:13.577811  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 11:52:13.602562  410030 provision.go:87] duration metric: took 419.926949ms to configureAuth
	I0127 11:52:13.602597  410030 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:52:13.602829  410030 config.go:182] Loaded profile config "bridge-230154": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:52:13.602905  410030 main.go:141] libmachine: Checking connection to Docker...
	I0127 11:52:13.602923  410030 main.go:141] libmachine: (bridge-230154) Calling .GetURL
	I0127 11:52:13.604054  410030 main.go:141] libmachine: (bridge-230154) DBG | using libvirt version 6000000
	I0127 11:52:13.606405  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.606734  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.606760  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.606925  410030 main.go:141] libmachine: Docker is up and running!
	I0127 11:52:13.606940  410030 main.go:141] libmachine: Reticulating splines...
	I0127 11:52:13.606947  410030 client.go:171] duration metric: took 25.909938238s to LocalClient.Create
	I0127 11:52:13.606968  410030 start.go:167] duration metric: took 25.909999682s to libmachine.API.Create "bridge-230154"
	I0127 11:52:13.606981  410030 start.go:293] postStartSetup for "bridge-230154" (driver="kvm2")
	I0127 11:52:13.606995  410030 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:52:13.607018  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:13.607273  410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:52:13.607302  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.609569  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.609936  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.609966  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.610158  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:13.610355  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.610531  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:13.610640  410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
	I0127 11:52:13.697284  410030 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:52:13.702294  410030 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:52:13.702320  410030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-348858/.minikube/addons for local assets ...
	I0127 11:52:13.702383  410030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-348858/.minikube/files for local assets ...
	I0127 11:52:13.702495  410030 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem -> 3562042.pem in /etc/ssl/certs
	I0127 11:52:13.702595  410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:52:13.713272  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem --> /etc/ssl/certs/3562042.pem (1708 bytes)
	I0127 11:52:13.737044  410030 start.go:296] duration metric: took 130.0485ms for postStartSetup
	I0127 11:52:13.737087  410030 main.go:141] libmachine: (bridge-230154) Calling .GetConfigRaw
	I0127 11:52:13.737687  410030 main.go:141] libmachine: (bridge-230154) Calling .GetIP
	I0127 11:52:13.740135  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.740568  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.740596  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.740857  410030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/config.json ...
	I0127 11:52:13.741063  410030 start.go:128] duration metric: took 26.061575251s to createHost
	I0127 11:52:13.741091  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.743565  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.743863  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.743892  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.744009  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:13.744178  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.744308  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.744464  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:13.744612  410030 main.go:141] libmachine: Using SSH client type: native
	I0127 11:52:13.744775  410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.114 22 <nil> <nil>}
	I0127 11:52:13.744786  410030 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:52:13.858058  410030 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978733.835977728
	
	I0127 11:52:13.858081  410030 fix.go:216] guest clock: 1737978733.835977728
	I0127 11:52:13.858090  410030 fix.go:229] Guest: 2025-01-27 11:52:13.835977728 +0000 UTC Remote: 2025-01-27 11:52:13.74107788 +0000 UTC m=+26.172194908 (delta=94.899848ms)
	I0127 11:52:13.858112  410030 fix.go:200] guest clock delta is within tolerance: 94.899848ms
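The tolerance check here is plain subtraction: guest 11:52:13.835977728 minus remote 11:52:13.741077880 gives 0.094899848 s, the 94.899848 ms delta reported above, well inside the allowed guest-clock skew.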
	I0127 11:52:13.858119  410030 start.go:83] releasing machines lock for "bridge-230154", held for 26.178731868s
	I0127 11:52:13.858143  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:13.858357  410030 main.go:141] libmachine: (bridge-230154) Calling .GetIP
	I0127 11:52:13.860564  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.860972  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.861005  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.861149  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:13.861700  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:13.861894  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:13.861978  410030 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:52:13.862037  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.862113  410030 ssh_runner.go:195] Run: cat /version.json
	I0127 11:52:13.862141  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:13.864536  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.864853  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.864880  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.864898  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.865008  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:13.865191  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.865337  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:13.865370  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:13.865394  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:13.865518  410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
	I0127 11:52:13.865598  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:13.865728  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:13.865888  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:13.866057  410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
	I0127 11:52:13.965402  410030 ssh_runner.go:195] Run: systemctl --version
	I0127 11:52:13.971806  410030 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:52:13.977779  410030 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:52:13.977840  410030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:52:13.994427  410030 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:52:13.994450  410030 start.go:495] detecting cgroup driver to use...
	I0127 11:52:13.994511  410030 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 11:52:14.024064  410030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:52:14.037402  410030 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:52:14.037442  410030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:52:14.051360  410030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:52:14.064833  410030 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:52:14.189820  410030 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:52:14.353457  410030 docker.go:233] disabling docker service ...
	I0127 11:52:14.353523  410030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:52:14.368733  410030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:52:14.383491  410030 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:52:14.519252  410030 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:52:14.653505  410030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:52:14.667113  410030 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:52:14.686409  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 11:52:14.698227  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 11:52:14.708812  410030 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 11:52:14.708860  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 11:52:14.719554  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:52:14.729838  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 11:52:14.740183  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:52:14.750883  410030 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:52:14.761217  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 11:52:14.771423  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 11:52:14.781773  410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
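Taken together, the sed edits above leave the touched fragments of /etc/containerd/config.toml looking roughly like the excerpt below (an abbreviated sketch of only the fields those commands rewrite, not the full generated file):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false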
	I0127 11:52:14.793278  410030 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:52:14.804439  410030 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:52:14.804483  410030 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:52:14.818950  410030 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:52:14.829832  410030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:52:14.959488  410030 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 11:52:14.989337  410030 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 11:52:14.989418  410030 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 11:52:14.994828  410030 retry.go:31] will retry after 1.345888224s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 11:52:16.341324  410030 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 11:52:16.347230  410030 start.go:563] Will wait 60s for crictl version
	I0127 11:52:16.347291  410030 ssh_runner.go:195] Run: which crictl
	I0127 11:52:16.351193  410030 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:52:16.395528  410030 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 11:52:16.395651  410030 ssh_runner.go:195] Run: containerd --version
	I0127 11:52:16.423238  410030 ssh_runner.go:195] Run: containerd --version
	I0127 11:52:16.449514  410030 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 11:52:16.450520  410030 main.go:141] libmachine: (bridge-230154) Calling .GetIP
	I0127 11:52:16.453118  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:16.453477  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:16.453507  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:16.453734  410030 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 11:52:16.458237  410030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:52:16.472482  410030 kubeadm.go:883] updating cluster {Name:bridge-230154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.114 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:52:16.472594  410030 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:52:16.472646  410030 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:52:16.504936  410030 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 11:52:16.504987  410030 ssh_runner.go:195] Run: which lz4
	I0127 11:52:16.509417  410030 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:52:16.514081  410030 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:52:16.514116  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398131433 bytes)
	I0127 11:52:18.011626  410030 containerd.go:563] duration metric: took 1.502237089s to copy over tarball
	I0127 11:52:18.011722  410030 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:52:20.285505  410030 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273743353s)
	I0127 11:52:20.285572  410030 containerd.go:570] duration metric: took 2.273906638s to extract the tarball
	I0127 11:52:20.285607  410030 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 11:52:20.324554  410030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:52:20.445111  410030 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 11:52:20.473323  410030 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:52:20.503997  410030 retry.go:31] will retry after 167.428638ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T11:52:20Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0127 11:52:20.672333  410030 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:52:20.709952  410030 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 11:52:20.709981  410030 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:52:20.709993  410030 kubeadm.go:934] updating node { 192.168.61.114 8443 v1.32.1 containerd true true} ...
	I0127 11:52:20.710125  410030 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-230154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0127 11:52:20.710197  410030 ssh_runner.go:195] Run: sudo crictl info
	I0127 11:52:20.744967  410030 cni.go:84] Creating CNI manager for "bridge"
	I0127 11:52:20.744998  410030 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:52:20.745028  410030 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.114 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-230154 NodeName:bridge-230154 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:52:20.745188  410030 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "bridge-230154"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.114"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.114"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:52:20.745251  410030 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:52:20.756008  410030 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:52:20.756057  410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:52:20.765655  410030 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0127 11:52:20.782155  410030 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:52:20.798911  410030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2309 bytes)
	I0127 11:52:20.816745  410030 ssh_runner.go:195] Run: grep 192.168.61.114	control-plane.minikube.internal$ /etc/hosts
	I0127 11:52:20.820748  410030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:52:20.833862  410030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:52:20.953656  410030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:52:20.974846  410030 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154 for IP: 192.168.61.114
	I0127 11:52:20.974871  410030 certs.go:194] generating shared ca certs ...
	I0127 11:52:20.974892  410030 certs.go:226] acquiring lock for ca certs: {Name:mkd458666dacb6826c0d92f860c3c2133957f34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:20.975122  410030 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-348858/.minikube/ca.key
	I0127 11:52:20.975196  410030 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.key
	I0127 11:52:20.975212  410030 certs.go:256] generating profile certs ...
	I0127 11:52:20.975305  410030 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.key
	I0127 11:52:20.975335  410030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt with IP's: []
	I0127 11:52:21.301307  410030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt ...
	I0127 11:52:21.301335  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt: {Name:mk56bf4c2bbecfad8654b1b4ec642ad6fec51061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:21.301487  410030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.key ...
	I0127 11:52:21.301498  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.key: {Name:mk552257e0fe7fe2855b6465ed9cf6fdbde292fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:21.301600  410030 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key.efd0145a
	I0127 11:52:21.301615  410030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt.efd0145a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.114]
	I0127 11:52:21.347405  410030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt.efd0145a ...
	I0127 11:52:21.347434  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt.efd0145a: {Name:mk6a6599e29481626e185ed34dee333ec39afdfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:21.347596  410030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key.efd0145a ...
	I0127 11:52:21.347613  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key.efd0145a: {Name:mk7efccd9616f59b687d73eb0de97063b6b07fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:21.347712  410030 certs.go:381] copying /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt.efd0145a -> /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt
	I0127 11:52:21.347813  410030 certs.go:385] copying /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key.efd0145a -> /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key
	I0127 11:52:21.347892  410030 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.key
	I0127 11:52:21.347914  410030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.crt with IP's: []
	I0127 11:52:21.603596  410030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.crt ...
	I0127 11:52:21.603626  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.crt: {Name:mk62ae8cb0440216cba0e9b53bb75a82eea68d94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:21.603813  410030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.key ...
	I0127 11:52:21.603851  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.key: {Name:mk874150a052e7bf16d1760bcb83588a7d7232ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:21.604047  410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204.pem (1338 bytes)
	W0127 11:52:21.604084  410030 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204_empty.pem, impossibly tiny 0 bytes
	I0127 11:52:21.604094  410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:52:21.604127  410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem (1082 bytes)
	I0127 11:52:21.604150  410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:52:21.604173  410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem (1679 bytes)
	I0127 11:52:21.604208  410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem (1708 bytes)
	I0127 11:52:21.604922  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:52:21.640478  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:52:21.675198  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:52:21.707991  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 11:52:21.734067  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 11:52:21.758859  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:52:21.785069  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:52:21.811694  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:52:21.839559  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:52:21.864922  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204.pem --> /usr/share/ca-certificates/356204.pem (1338 bytes)
	I0127 11:52:21.893151  410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem --> /usr/share/ca-certificates/3562042.pem (1708 bytes)
	I0127 11:52:21.918761  410030 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:52:21.936954  410030 ssh_runner.go:195] Run: openssl version
	I0127 11:52:21.943412  410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356204.pem && ln -fs /usr/share/ca-certificates/356204.pem /etc/ssl/certs/356204.pem"
	I0127 11:52:21.953934  410030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356204.pem
	I0127 11:52:21.958381  410030 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/356204.pem
	I0127 11:52:21.958435  410030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356204.pem
	I0127 11:52:21.964735  410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356204.pem /etc/ssl/certs/51391683.0"
	I0127 11:52:21.976503  410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3562042.pem && ln -fs /usr/share/ca-certificates/3562042.pem /etc/ssl/certs/3562042.pem"
	I0127 11:52:21.987257  410030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3562042.pem
	I0127 11:52:21.993575  410030 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/3562042.pem
	I0127 11:52:21.993646  410030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3562042.pem
	I0127 11:52:21.999525  410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3562042.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:52:22.009959  410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:52:22.021429  410030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:52:22.026427  410030 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:52:22.026475  410030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:52:22.032448  410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:52:22.043143  410030 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:52:22.047488  410030 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:52:22.047543  410030 kubeadm.go:392] StartCluster: {Name:bridge-230154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.114 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:52:22.047613  410030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 11:52:22.047658  410030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:52:22.086372  410030 cri.go:89] found id: ""
	I0127 11:52:22.086433  410030 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:52:22.096728  410030 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:52:22.106517  410030 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:52:22.116214  410030 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:52:22.116231  410030 kubeadm.go:157] found existing configuration files:
	
	I0127 11:52:22.116264  410030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:52:22.125344  410030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:52:22.125413  410030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:52:22.134811  410030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:52:22.143836  410030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:52:22.143877  410030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:52:22.153251  410030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:52:22.161993  410030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:52:22.162078  410030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:52:22.171015  410030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:52:22.179758  410030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:52:22.179812  410030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:52:22.189014  410030 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:52:22.375345  410030 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:52:32.209450  410030 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:52:32.209522  410030 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:52:32.209617  410030 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:52:32.209722  410030 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:52:32.209830  410030 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:52:32.209885  410030 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:52:32.211330  410030 out.go:235]   - Generating certificates and keys ...
	I0127 11:52:32.211448  410030 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:52:32.211535  410030 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:52:32.211635  410030 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:52:32.211700  410030 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:52:32.211752  410030 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:52:32.211795  410030 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:52:32.211845  410030 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:52:32.211948  410030 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-230154 localhost] and IPs [192.168.61.114 127.0.0.1 ::1]
	I0127 11:52:32.211995  410030 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:52:32.212189  410030 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-230154 localhost] and IPs [192.168.61.114 127.0.0.1 ::1]
	I0127 11:52:32.212294  410030 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:52:32.212377  410030 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:52:32.212435  410030 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:52:32.212524  410030 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:52:32.212592  410030 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:52:32.212643  410030 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:52:32.212692  410030 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:52:32.212798  410030 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:52:32.212898  410030 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:52:32.212993  410030 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:52:32.213052  410030 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:52:32.214270  410030 out.go:235]   - Booting up control plane ...
	I0127 11:52:32.214386  410030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:52:32.214498  410030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:52:32.214590  410030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:52:32.214739  410030 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:52:32.214899  410030 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:52:32.214967  410030 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:52:32.215138  410030 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:52:32.215293  410030 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:52:32.215402  410030 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001079301s
	I0127 11:52:32.215488  410030 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:52:32.215548  410030 kubeadm.go:310] [api-check] The API server is healthy after 4.502067696s
	I0127 11:52:32.215682  410030 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:52:32.215799  410030 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:52:32.215885  410030 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:52:32.216101  410030 kubeadm.go:310] [mark-control-plane] Marking the node bridge-230154 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:52:32.216183  410030 kubeadm.go:310] [bootstrap-token] Using token: 3ugidl.t0qx3cfrqpz3s5rm
	I0127 11:52:32.218040  410030 out.go:235]   - Configuring RBAC rules ...
	I0127 11:52:32.218199  410030 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:52:32.218297  410030 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:52:32.218438  410030 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:52:32.218656  410030 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:52:32.218778  410030 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:52:32.218872  410030 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:52:32.219002  410030 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:52:32.219065  410030 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:52:32.219138  410030 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:52:32.219147  410030 kubeadm.go:310] 
	I0127 11:52:32.219229  410030 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:52:32.219238  410030 kubeadm.go:310] 
	I0127 11:52:32.219362  410030 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:52:32.219371  410030 kubeadm.go:310] 
	I0127 11:52:32.219407  410030 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:52:32.219511  410030 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:52:32.219596  410030 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:52:32.219609  410030 kubeadm.go:310] 
	I0127 11:52:32.219697  410030 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:52:32.219711  410030 kubeadm.go:310] 
	I0127 11:52:32.219782  410030 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:52:32.219793  410030 kubeadm.go:310] 
	I0127 11:52:32.219869  410030 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:52:32.219979  410030 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:52:32.220072  410030 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:52:32.220081  410030 kubeadm.go:310] 
	I0127 11:52:32.220215  410030 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:52:32.220347  410030 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:52:32.220359  410030 kubeadm.go:310] 
	I0127 11:52:32.220497  410030 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3ugidl.t0qx3cfrqpz3s5rm \
	I0127 11:52:32.220638  410030 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c769a71fa2072963699012a67c9bb4b27b6fc88b52aea51191b7b2189ca81982 \
	I0127 11:52:32.220670  410030 kubeadm.go:310] 	--control-plane 
	I0127 11:52:32.220679  410030 kubeadm.go:310] 
	I0127 11:52:32.220787  410030 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:52:32.220796  410030 kubeadm.go:310] 
	I0127 11:52:32.220902  410030 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3ugidl.t0qx3cfrqpz3s5rm \
	I0127 11:52:32.221064  410030 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c769a71fa2072963699012a67c9bb4b27b6fc88b52aea51191b7b2189ca81982 
	I0127 11:52:32.221079  410030 cni.go:84] Creating CNI manager for "bridge"
	I0127 11:52:32.222330  410030 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:52:32.223261  410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:52:32.235254  410030 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 11:52:32.261938  410030 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:52:32.262064  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:32.262145  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-230154 minikube.k8s.io/updated_at=2025_01_27T11_52_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=bridge-230154 minikube.k8s.io/primary=true
	I0127 11:52:32.280765  410030 ops.go:34] apiserver oom_adj: -16
	I0127 11:52:32.416195  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:32.916850  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:33.416903  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:33.916419  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:34.417254  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:34.916570  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:35.416622  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:35.916814  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:36.417150  410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:52:36.512250  410030 kubeadm.go:1113] duration metric: took 4.250259054s to wait for elevateKubeSystemPrivileges
	I0127 11:52:36.512301  410030 kubeadm.go:394] duration metric: took 14.46476068s to StartCluster
	I0127 11:52:36.512331  410030 settings.go:142] acquiring lock: {Name:mkb277d193c8888d23a77778c65f322a69e59091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:36.512467  410030 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 11:52:36.516653  410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/kubeconfig: {Name:mk12891275228a2835a35659c2ede45028f0a576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:52:36.516976  410030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 11:52:36.516972  410030 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.114 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 11:52:36.517077  410030 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:52:36.517203  410030 addons.go:69] Setting storage-provisioner=true in profile "bridge-230154"
	I0127 11:52:36.517227  410030 addons.go:238] Setting addon storage-provisioner=true in "bridge-230154"
	I0127 11:52:36.517240  410030 config.go:182] Loaded profile config "bridge-230154": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:52:36.517270  410030 host.go:66] Checking if "bridge-230154" exists ...
	I0127 11:52:36.517307  410030 addons.go:69] Setting default-storageclass=true in profile "bridge-230154"
	I0127 11:52:36.517328  410030 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-230154"
	I0127 11:52:36.517801  410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:52:36.517819  410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:52:36.517855  410030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:52:36.517860  410030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:52:36.519326  410030 out.go:177] * Verifying Kubernetes components...
	I0127 11:52:36.520466  410030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:52:36.537759  410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32825
	I0127 11:52:36.538308  410030 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:52:36.538532  410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0127 11:52:36.538955  410030 main.go:141] libmachine: Using API Version  1
	I0127 11:52:36.538984  410030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:52:36.539060  410030 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:52:36.539411  410030 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:52:36.539558  410030 main.go:141] libmachine: Using API Version  1
	I0127 11:52:36.539581  410030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:52:36.539945  410030 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:52:36.539986  410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:52:36.540037  410030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:52:36.540303  410030 main.go:141] libmachine: (bridge-230154) Calling .GetState
	I0127 11:52:36.543982  410030 addons.go:238] Setting addon default-storageclass=true in "bridge-230154"
	I0127 11:52:36.544027  410030 host.go:66] Checking if "bridge-230154" exists ...
	I0127 11:52:36.544408  410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:52:36.544452  410030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:52:36.557799  410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0127 11:52:36.558329  410030 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:52:36.558879  410030 main.go:141] libmachine: Using API Version  1
	I0127 11:52:36.558897  410030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:52:36.559224  410030 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:52:36.559412  410030 main.go:141] libmachine: (bridge-230154) Calling .GetState
	I0127 11:52:36.559996  410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32917
	I0127 11:52:36.560556  410030 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:52:36.561039  410030 main.go:141] libmachine: Using API Version  1
	I0127 11:52:36.561051  410030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:52:36.561110  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:36.561469  410030 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:52:36.561948  410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:52:36.561991  410030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:52:36.562672  410030 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:52:36.563764  410030 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:52:36.563778  410030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:52:36.563793  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:36.567499  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:36.568057  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:36.568077  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:36.568247  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:36.568401  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:36.568577  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:36.568732  410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
	I0127 11:52:36.577540  410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0127 11:52:36.578011  410030 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:52:36.578548  410030 main.go:141] libmachine: Using API Version  1
	I0127 11:52:36.578571  410030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:52:36.578891  410030 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:52:36.579083  410030 main.go:141] libmachine: (bridge-230154) Calling .GetState
	I0127 11:52:36.580470  410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
	I0127 11:52:36.580638  410030 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:52:36.580655  410030 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:52:36.580682  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
	I0127 11:52:36.583026  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:36.583362  410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
	I0127 11:52:36.583391  410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
	I0127 11:52:36.583573  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
	I0127 11:52:36.583748  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
	I0127 11:52:36.583875  410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
	I0127 11:52:36.584004  410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
	I0127 11:52:36.919631  410030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:52:36.921628  410030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:52:36.921644  410030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 11:52:36.988242  410030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:52:38.185164  410030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.265497157s)
	I0127 11:52:38.185231  410030 main.go:141] libmachine: Making call to close driver server
	I0127 11:52:38.185230  410030 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.263561786s)
	I0127 11:52:38.185246  410030 main.go:141] libmachine: (bridge-230154) Calling .Close
	I0127 11:52:38.185289  410030 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.263612805s)
	I0127 11:52:38.185330  410030 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0127 11:52:38.185372  410030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.197100952s)
	I0127 11:52:38.185399  410030 main.go:141] libmachine: Making call to close driver server
	I0127 11:52:38.185427  410030 main.go:141] libmachine: (bridge-230154) Calling .Close
	I0127 11:52:38.185562  410030 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:52:38.185597  410030 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:52:38.185609  410030 main.go:141] libmachine: Making call to close driver server
	I0127 11:52:38.185616  410030 main.go:141] libmachine: (bridge-230154) Calling .Close
	I0127 11:52:38.185828  410030 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:52:38.185852  410030 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:52:38.185862  410030 main.go:141] libmachine: Making call to close driver server
	I0127 11:52:38.185868  410030 main.go:141] libmachine: (bridge-230154) Calling .Close
	I0127 11:52:38.186004  410030 main.go:141] libmachine: (bridge-230154) DBG | Closing plugin on server side
	I0127 11:52:38.186048  410030 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:52:38.186069  410030 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:52:38.186075  410030 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:52:38.186079  410030 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:52:38.187011  410030 node_ready.go:35] waiting up to 15m0s for node "bridge-230154" to be "Ready" ...
	I0127 11:52:38.212873  410030 node_ready.go:49] node "bridge-230154" has status "Ready":"True"
	I0127 11:52:38.212905  410030 node_ready.go:38] duration metric: took 25.865633ms for node "bridge-230154" to be "Ready" ...
	I0127 11:52:38.212917  410030 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:52:38.214274  410030 main.go:141] libmachine: Making call to close driver server
	I0127 11:52:38.214298  410030 main.go:141] libmachine: (bridge-230154) Calling .Close
	I0127 11:52:38.214581  410030 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:52:38.214630  410030 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:52:38.214612  410030 main.go:141] libmachine: (bridge-230154) DBG | Closing plugin on server side
	I0127 11:52:38.216008  410030 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 11:52:38.216924  410030 addons.go:514] duration metric: took 1.699865075s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 11:52:38.224349  410030 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:38.695217  410030 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-230154" context rescaled to 1 replicas
	I0127 11:52:40.231472  410030 pod_ready.go:103] pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace has status "Ready":"False"
	I0127 11:52:42.732355  410030 pod_ready.go:103] pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace has status "Ready":"False"
	I0127 11:52:44.230143  410030 pod_ready.go:98] pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.114 HostIPs:[{IP:192.168.61.114}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 11:52:36 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 11:52:37 +0000 UTC,FinishedAt:2025-01-27 11:52:43 +0000 UTC,ContainerID:containerd://c98c745d4f9edf1ff917ee47655ca1208c7e4b09a4743c10c5415ed7b2fec8bd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:containerd://c98c745d4f9edf1ff917ee47655ca1208c7e4b09a4743c10c5415ed7b2fec8bd Started:0xc001b44fd0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001c1e130} {Name:kube-api-access-flxzd MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001c1e140}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 11:52:44.230174  410030 pod_ready.go:82] duration metric: took 6.00579922s for pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace to be "Ready" ...
	E0127 11:52:44.230189  410030 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.114 HostIPs:[{IP:192.168.61.114}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 11:52:36 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 11:52:37 +0000 UTC,FinishedAt:2025-01-27 11:52:43 +0000 UTC,ContainerID:containerd://c98c745d4f9edf1ff917ee47655ca1208c7e4b09a4743c10c5415ed7b2fec8bd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:containerd://c98c745d4f9edf1ff917ee47655ca1208c7e4b09a4743c10c5415ed7b2fec8bd Started:0xc001b44fd0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001c1e130} {Name:kube-api-access-flxzd MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001c1e140}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 11:52:44.230202  410030 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-pc8xl" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:44.234796  410030 pod_ready.go:93] pod "coredns-668d6bf9bc-pc8xl" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:44.234815  410030 pod_ready.go:82] duration metric: took 4.604397ms for pod "coredns-668d6bf9bc-pc8xl" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:44.234823  410030 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:44.238759  410030 pod_ready.go:93] pod "etcd-bridge-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:44.238775  410030 pod_ready.go:82] duration metric: took 3.947094ms for pod "etcd-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:44.238782  410030 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.244732  410030 pod_ready.go:93] pod "kube-apiserver-bridge-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:45.244763  410030 pod_ready.go:82] duration metric: took 1.00597309s for pod "kube-apiserver-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.244778  410030 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.249321  410030 pod_ready.go:93] pod "kube-controller-manager-bridge-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:45.249342  410030 pod_ready.go:82] duration metric: took 4.554992ms for pod "kube-controller-manager-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.249355  410030 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-5xb8t" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.428257  410030 pod_ready.go:93] pod "kube-proxy-5xb8t" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:45.428277  410030 pod_ready.go:82] duration metric: took 178.914707ms for pod "kube-proxy-5xb8t" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.428285  410030 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.829776  410030 pod_ready.go:93] pod "kube-scheduler-bridge-230154" in "kube-system" namespace has status "Ready":"True"
	I0127 11:52:45.829809  410030 pod_ready.go:82] duration metric: took 401.516042ms for pod "kube-scheduler-bridge-230154" in "kube-system" namespace to be "Ready" ...
	I0127 11:52:45.829824  410030 pod_ready.go:39] duration metric: took 7.616894592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:52:45.829844  410030 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:52:45.829909  410030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:52:45.845203  410030 api_server.go:72] duration metric: took 9.328191567s to wait for apiserver process to appear ...
	I0127 11:52:45.845230  410030 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:52:45.845249  410030 api_server.go:253] Checking apiserver healthz at https://192.168.61.114:8443/healthz ...
	I0127 11:52:45.849548  410030 api_server.go:279] https://192.168.61.114:8443/healthz returned 200:
	ok
	I0127 11:52:45.850315  410030 api_server.go:141] control plane version: v1.32.1
	I0127 11:52:45.850339  410030 api_server.go:131] duration metric: took 5.10115ms to wait for apiserver health ...
	I0127 11:52:45.850346  410030 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:52:46.030070  410030 system_pods.go:59] 7 kube-system pods found
	I0127 11:52:46.030111  410030 system_pods.go:61] "coredns-668d6bf9bc-pc8xl" [45ae809c-52a6-4405-8382-d79f3d6b3e58] Running
	I0127 11:52:46.030120  410030 system_pods.go:61] "etcd-bridge-230154" [64ffad49-a1cc-4273-a76f-27829ec98715] Running
	I0127 11:52:46.030127  410030 system_pods.go:61] "kube-apiserver-bridge-230154" [ef8b8909-ce47-4280-a8ad-1c3dcd14e862] Running
	I0127 11:52:46.030142  410030 system_pods.go:61] "kube-controller-manager-bridge-230154" [c8aff057-390a-474d-9436-1bdcc79bd8de] Running
	I0127 11:52:46.030149  410030 system_pods.go:61] "kube-proxy-5xb8t" [bf62bfa5-b098-442e-b13c-2a041c874c50] Running
	I0127 11:52:46.030159  410030 system_pods.go:61] "kube-scheduler-bridge-230154" [da73974e-b55e-400b-a078-0903bc8b7285] Running
	I0127 11:52:46.030169  410030 system_pods.go:61] "storage-provisioner" [58b2ed51-7586-457e-a455-1a52afbcc2fd] Running
	I0127 11:52:46.030181  410030 system_pods.go:74] duration metric: took 179.827627ms to wait for pod list to return data ...
	I0127 11:52:46.030196  410030 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:52:46.228329  410030 default_sa.go:45] found service account: "default"
	I0127 11:52:46.228364  410030 default_sa.go:55] duration metric: took 198.158482ms for default service account to be created ...
	I0127 11:52:46.228375  410030 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:52:46.430997  410030 system_pods.go:87] 7 kube-system pods found
	I0127 11:52:46.630596  410030 system_pods.go:105] "coredns-668d6bf9bc-pc8xl" [45ae809c-52a6-4405-8382-d79f3d6b3e58] Running
	I0127 11:52:46.630617  410030 system_pods.go:105] "etcd-bridge-230154" [64ffad49-a1cc-4273-a76f-27829ec98715] Running
	I0127 11:52:46.630623  410030 system_pods.go:105] "kube-apiserver-bridge-230154" [ef8b8909-ce47-4280-a8ad-1c3dcd14e862] Running
	I0127 11:52:46.630628  410030 system_pods.go:105] "kube-controller-manager-bridge-230154" [c8aff057-390a-474d-9436-1bdcc79bd8de] Running
	I0127 11:52:46.630632  410030 system_pods.go:105] "kube-proxy-5xb8t" [bf62bfa5-b098-442e-b13c-2a041c874c50] Running
	I0127 11:52:46.630636  410030 system_pods.go:105] "kube-scheduler-bridge-230154" [da73974e-b55e-400b-a078-0903bc8b7285] Running
	I0127 11:52:46.630640  410030 system_pods.go:105] "storage-provisioner" [58b2ed51-7586-457e-a455-1a52afbcc2fd] Running
	I0127 11:52:46.630649  410030 system_pods.go:147] duration metric: took 402.266545ms to wait for k8s-apps to be running ...
	I0127 11:52:46.630655  410030 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 11:52:46.630700  410030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:52:46.647032  410030 system_svc.go:56] duration metric: took 16.365202ms WaitForService to wait for kubelet
	I0127 11:52:46.647063  410030 kubeadm.go:582] duration metric: took 10.130054313s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:52:46.647088  410030 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:52:46.828212  410030 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:52:46.828240  410030 node_conditions.go:123] node cpu capacity is 2
	I0127 11:52:46.828255  410030 node_conditions.go:105] duration metric: took 181.16132ms to run NodePressure ...
	I0127 11:52:46.828269  410030 start.go:241] waiting for startup goroutines ...
	I0127 11:52:46.828280  410030 start.go:246] waiting for cluster config update ...
	I0127 11:52:46.828295  410030 start.go:255] writing updated cluster config ...
	I0127 11:52:46.828597  410030 ssh_runner.go:195] Run: rm -f paused
	I0127 11:52:46.879719  410030 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 11:52:46.881278  410030 out.go:177] * Done! kubectl is now configured to use "bridge-230154" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	d725ceb84f215       523cad1a4df73       57 seconds ago      Exited              dashboard-metrics-scraper   9                   145201ae25b4d       dashboard-metrics-scraper-86c6bf9756-c8zpk
	21e002c4b27af       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   c8337fb8b13cf       kubernetes-dashboard-7779f9b69b-swzz4
	acf28ac6b45a2       6e38f40d628db       21 minutes ago      Running             storage-provisioner         0                   7bdd77d02b1b7       storage-provisioner
	ff174283f6143       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   ede9f6ac45c8b       coredns-668d6bf9bc-qpwkb
	2e23143f355e6       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   8a52ebe1ce089       coredns-668d6bf9bc-sqpwt
	d9e40f17642c2       e29f9c7391fd9       21 minutes ago      Running             kube-proxy                  0                   589660ec97e4d       kube-proxy-6r76d
	bf7c77f5162cf       a9e7e6b294baf       22 minutes ago      Running             etcd                        2                   65f6436db81e7       etcd-default-k8s-diff-port-259716
	ba8de0d0033dd       019ee182b58e2       22 minutes ago      Running             kube-controller-manager     2                   5fcaffa0d2410       kube-controller-manager-default-k8s-diff-port-259716
	ead1ac76863e3       95c0bda56fc4d       22 minutes ago      Running             kube-apiserver              2                   60575c2564e89       kube-apiserver-default-k8s-diff-port-259716
	e78aff49ea6e8       2b0d6572d062c       22 minutes ago      Running             kube-scheduler              2                   04bac8b1697bb       kube-scheduler-default-k8s-diff-port-259716
	
	
	==> containerd <==
	Jan 27 12:03:41 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:03:41.602692402Z" level=info msg="CreateContainer within sandbox \"145201ae25b4dcd697d645386ae4a9e8a35fd8c25d6af32ffacfc0fe15b48ccb\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"1ed0b1ee266376c6b79117cd416b33be86c973226aa9771952520b46b98e4f09\""
	Jan 27 12:03:41 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:03:41.604173591Z" level=info msg="StartContainer for \"1ed0b1ee266376c6b79117cd416b33be86c973226aa9771952520b46b98e4f09\""
	Jan 27 12:03:41 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:03:41.692821769Z" level=info msg="StartContainer for \"1ed0b1ee266376c6b79117cd416b33be86c973226aa9771952520b46b98e4f09\" returns successfully"
	Jan 27 12:03:41 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:03:41.736320322Z" level=info msg="shim disconnected" id=1ed0b1ee266376c6b79117cd416b33be86c973226aa9771952520b46b98e4f09 namespace=k8s.io
	Jan 27 12:03:41 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:03:41.736498442Z" level=warning msg="cleaning up after shim disconnected" id=1ed0b1ee266376c6b79117cd416b33be86c973226aa9771952520b46b98e4f09 namespace=k8s.io
	Jan 27 12:03:41 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:03:41.736616090Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 12:03:42 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:03:42.278947849Z" level=info msg="RemoveContainer for \"2e82f27588bc23d29ea5b096b7eaea2d78bb8bdd98d4aee1819e22af402cb956\""
	Jan 27 12:03:42 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:03:42.286095041Z" level=info msg="RemoveContainer for \"2e82f27588bc23d29ea5b096b7eaea2d78bb8bdd98d4aee1819e22af402cb956\" returns successfully"
	Jan 27 12:03:45 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:03:45.573014527Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:03:45 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:03:45.582741106Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 12:03:45 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:03:45.584619900Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 12:03:45 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:03:45.584677797Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 12:08:42 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:42.574698413Z" level=info msg="CreateContainer within sandbox \"145201ae25b4dcd697d645386ae4a9e8a35fd8c25d6af32ffacfc0fe15b48ccb\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 27 12:08:42 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:42.603720720Z" level=info msg="CreateContainer within sandbox \"145201ae25b4dcd697d645386ae4a9e8a35fd8c25d6af32ffacfc0fe15b48ccb\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"d725ceb84f215cde1e2ff60139beb5fd51f1bcf07357a4874d9c3ab20f832679\""
	Jan 27 12:08:42 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:42.604455804Z" level=info msg="StartContainer for \"d725ceb84f215cde1e2ff60139beb5fd51f1bcf07357a4874d9c3ab20f832679\""
	Jan 27 12:08:42 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:42.679156611Z" level=info msg="StartContainer for \"d725ceb84f215cde1e2ff60139beb5fd51f1bcf07357a4874d9c3ab20f832679\" returns successfully"
	Jan 27 12:08:42 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:42.744190222Z" level=info msg="shim disconnected" id=d725ceb84f215cde1e2ff60139beb5fd51f1bcf07357a4874d9c3ab20f832679 namespace=k8s.io
	Jan 27 12:08:42 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:42.744250518Z" level=warning msg="cleaning up after shim disconnected" id=d725ceb84f215cde1e2ff60139beb5fd51f1bcf07357a4874d9c3ab20f832679 namespace=k8s.io
	Jan 27 12:08:42 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:42.744260115Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 12:08:42 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:42.976435418Z" level=info msg="RemoveContainer for \"1ed0b1ee266376c6b79117cd416b33be86c973226aa9771952520b46b98e4f09\""
	Jan 27 12:08:42 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:42.982118694Z" level=info msg="RemoveContainer for \"1ed0b1ee266376c6b79117cd416b33be86c973226aa9771952520b46b98e4f09\" returns successfully"
	Jan 27 12:08:51 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:51.573043200Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:08:51 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:51.582132310Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 12:08:51 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:51.584362013Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 12:08:51 default-k8s-diff-port-259716 containerd[560]: time="2025-01-27T12:08:51.584499976Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [2e23143f355e62fa625e271e3dcd645736c3c36dbc75756e0af5fd3a76dc4968] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ff174283f61437121617ea3d55a3f4c9a0428593e0e267479f7cb9fd7761d5b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-259716
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-259716
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa
	                    minikube.k8s.io/name=default-k8s-diff-port-259716
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T11_47_44_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:47:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-259716
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:09:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:08:20 +0000   Mon, 27 Jan 2025 11:47:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:08:20 +0000   Mon, 27 Jan 2025 11:47:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:08:20 +0000   Mon, 27 Jan 2025 11:47:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:08:20 +0000   Mon, 27 Jan 2025 11:47:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    default-k8s-diff-port-259716
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df76d8924c0f4e3fab209ce059d1b2bf
	  System UUID:                df76d892-4c0f-4e3f-ab20-9ce059d1b2bf
	  Boot ID:                    b4c3fdef-f25d-4f35-9b66-9d0aec13101d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-qpwkb                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-sqpwt                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-259716                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-259716             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-259716    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-6r76d                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-259716             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-h9c6c                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-c8zpk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-swzz4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-259716 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-259716 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-259716 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-259716 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-259716 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-259716 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-259716 event: Registered Node default-k8s-diff-port-259716 in Controller
	
	
	==> dmesg <==
	[  +0.058734] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043622] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.310736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jan27 11:43] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.728061] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.256748] systemd-fstab-generator[483]: Ignoring "noauto" option for root device
	[  +0.056648] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054280] systemd-fstab-generator[495]: Ignoring "noauto" option for root device
	[  +0.179951] systemd-fstab-generator[509]: Ignoring "noauto" option for root device
	[  +0.145489] systemd-fstab-generator[521]: Ignoring "noauto" option for root device
	[  +0.328594] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +1.091291] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +2.089732] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +1.266159] kauditd_printk_skb: 225 callbacks suppressed
	[  +5.028724] kauditd_printk_skb: 50 callbacks suppressed
	[ +11.430760] kauditd_printk_skb: 70 callbacks suppressed
	[Jan27 11:47] systemd-fstab-generator[3094]: Ignoring "noauto" option for root device
	[  +7.105093] systemd-fstab-generator[3458]: Ignoring "noauto" option for root device
	[  +0.076235] kauditd_printk_skb: 87 callbacks suppressed
	[  +5.424100] systemd-fstab-generator[3589]: Ignoring "noauto" option for root device
	[  +0.101236] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.103196] kauditd_printk_skb: 110 callbacks suppressed
	[  +6.169146] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [bf7c77f5162cf841de3a787157172a0c9d05fe17b03397a9b4351b63795e8b1f] <==
	{"level":"warn","ts":"2025-01-27T11:51:17.658799Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.462164ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:51:17.658881Z","caller":"traceutil/trace.go:171","msg":"trace[968350125] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:781; }","duration":"137.632348ms","start":"2025-01-27T11:51:17.521237Z","end":"2025-01-27T11:51:17.658869Z","steps":["trace[968350125] 'range keys from in-memory index tree'  (duration: 137.390182ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:51:18.035721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.569475ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12933938627363201073 > lease_revoke:<id:337e94a796afdf90>","response":"size:28"}
	{"level":"info","ts":"2025-01-27T11:51:18.035824Z","caller":"traceutil/trace.go:171","msg":"trace[1021182730] linearizableReadLoop","detail":"{readStateIndex:839; appliedIndex:838; }","duration":"114.066262ms","start":"2025-01-27T11:51:17.921737Z","end":"2025-01-27T11:51:18.035803Z","steps":["trace[1021182730] 'read index received'  (duration: 23.507µs)","trace[1021182730] 'applied index is now lower than readState.Index'  (duration: 114.041213ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T11:51:18.035963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.216532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:51:18.036007Z","caller":"traceutil/trace.go:171","msg":"trace[1267879304] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:781; }","duration":"114.305999ms","start":"2025-01-27T11:51:17.921690Z","end":"2025-01-27T11:51:18.035996Z","steps":["trace[1267879304] 'agreement among raft nodes before linearized reading'  (duration: 114.208237ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:51:19.753927Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.7115ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:51:19.753988Z","caller":"traceutil/trace.go:171","msg":"trace[1826489837] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:782; }","duration":"232.816486ms","start":"2025-01-27T11:51:19.521158Z","end":"2025-01-27T11:51:19.753975Z","steps":["trace[1826489837] 'range keys from in-memory index tree'  (duration: 232.631538ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T11:52:21.966726Z","caller":"traceutil/trace.go:171","msg":"trace[322434366] transaction","detail":"{read_only:false; response_revision:843; number_of_response:1; }","duration":"175.675984ms","start":"2025-01-27T11:52:21.791017Z","end":"2025-01-27T11:52:21.966693Z","steps":["trace[322434366] 'process raft request'  (duration: 175.571682ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T11:52:21.967158Z","caller":"traceutil/trace.go:171","msg":"trace[2010517375] linearizableReadLoop","detail":"{readStateIndex:914; appliedIndex:914; }","duration":"104.90861ms","start":"2025-01-27T11:52:21.862231Z","end":"2025-01-27T11:52:21.967140Z","steps":["trace[2010517375] 'read index received'  (duration: 104.807868ms)","trace[2010517375] 'applied index is now lower than readState.Index'  (duration: 99.415µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T11:52:21.967453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.164769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-01-27T11:52:21.967498Z","caller":"traceutil/trace.go:171","msg":"trace[186236770] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:843; }","duration":"105.289975ms","start":"2025-01-27T11:52:21.862199Z","end":"2025-01-27T11:52:21.967489Z","steps":["trace[186236770] 'agreement among raft nodes before linearized reading'  (duration: 105.121589ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:52:22.260203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.74322ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:52:22.260285Z","caller":"traceutil/trace.go:171","msg":"trace[934248351] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:843; }","duration":"124.839346ms","start":"2025-01-27T11:52:22.135424Z","end":"2025-01-27T11:52:22.260264Z","steps":["trace[934248351] 'range keys from in-memory index tree'  (duration: 124.709593ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:52:22.261345Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.377352ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:52:22.261408Z","caller":"traceutil/trace.go:171","msg":"trace[1821163618] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:843; }","duration":"142.505746ms","start":"2025-01-27T11:52:22.118889Z","end":"2025-01-27T11:52:22.261395Z","steps":["trace[1821163618] 'range keys from in-memory index tree'  (duration: 142.265816ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T11:57:39.479357Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":860}
	{"level":"info","ts":"2025-01-27T11:57:39.525004Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":860,"took":"44.538589ms","hash":3104247496,"current-db-size-bytes":2818048,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2818048,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2025-01-27T11:57:39.525064Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3104247496,"revision":860,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T12:02:39.486725Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1111}
	{"level":"info","ts":"2025-01-27T12:02:39.493601Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1111,"took":"5.69655ms","hash":776270540,"current-db-size-bytes":2818048,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1724416,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T12:02:39.494109Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":776270540,"revision":1111,"compact-revision":860}
	{"level":"info","ts":"2025-01-27T12:07:39.493039Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1363}
	{"level":"info","ts":"2025-01-27T12:07:39.497386Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1363,"took":"3.748362ms","hash":2240925596,"current-db-size-bytes":2818048,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1724416,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T12:07:39.497442Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2240925596,"revision":1363,"compact-revision":1111}
	
	
	==> kernel <==
	 12:09:41 up 26 min,  0 users,  load average: 0.23, 0.20, 0.18
	Linux default-k8s-diff-port-259716 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ead1ac76863e394c456fc020f0c2ba9934b70ff00c86ff6917f7be5a4cfc7090] <==
	I0127 12:05:42.066058       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:05:42.067208       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:07:41.063904       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:07:41.064195       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:07:42.066380       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:07:42.066676       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 12:07:42.066587       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:07:42.067062       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:07:42.067845       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:07:42.068982       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:08:42.068585       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:08:42.068700       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 12:08:42.069667       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:08:42.069897       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:08:42.070092       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:08:42.071203       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ba8de0d0033dd5ef290e6afa048d4dea2eaa109f097c1e5f1e2bb4291ed7af48] <==
	E0127 12:04:47.884628       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:04:47.926249       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:05:17.892202       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:05:17.933468       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:05:47.898916       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:05:47.947391       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:06:17.905127       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:06:17.955940       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:06:47.913151       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:06:47.964752       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:07:17.918715       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:07:17.972585       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:07:47.924928       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:07:47.983417       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:08:17.931605       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:08:17.991105       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:08:20.061722       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-259716"
	I0127 12:08:42.994274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="388.582µs"
	I0127 12:08:47.413589       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="125.145µs"
	E0127 12:08:47.939132       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:08:47.998423       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:09:05.585948       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="80.033µs"
	I0127 12:09:17.593304       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="55.666µs"
	E0127 12:09:17.944361       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:09:18.005730       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [d9e40f17642c26f0bac651320c920a44ea24d8aa24e5a37c180a25264d96c5dc] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 11:47:49.526011       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 11:47:49.541969       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E0127 11:47:49.542046       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 11:47:49.786256       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 11:47:49.786298       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 11:47:49.786320       1 server_linux.go:170] "Using iptables Proxier"
	I0127 11:47:49.794114       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 11:47:49.794394       1 server.go:497] "Version info" version="v1.32.1"
	I0127 11:47:49.794405       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 11:47:49.796465       1 config.go:199] "Starting service config controller"
	I0127 11:47:49.796502       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 11:47:49.796526       1 config.go:105] "Starting endpoint slice config controller"
	I0127 11:47:49.796529       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 11:47:49.797383       1 config.go:329] "Starting node config controller"
	I0127 11:47:49.797395       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 11:47:49.896943       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 11:47:49.896972       1 shared_informer.go:320] Caches are synced for service config
	I0127 11:47:49.898188       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e78aff49ea6e840dd098f37ef015c41ac1e8219e3f17ff999a2f177448c705e0] <==
	W0127 11:47:41.084228       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 11:47:41.086719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:41.084257       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 11:47:41.086762       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:41.084277       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 11:47:41.086848       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:41.084307       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 11:47:41.086998       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:41.084368       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 11:47:41.087119       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:41.084415       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 11:47:41.087191       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:41.084466       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 11:47:41.087240       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:42.016490       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 11:47:42.016592       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:42.124310       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 11:47:42.124473       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:42.170863       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 11:47:42.170951       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 11:47:42.201783       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 11:47:42.202422       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:42.216170       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 11:47:42.216279       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 11:47:43.967043       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:08:42 default-k8s-diff-port-259716 kubelet[3465]: I0127 12:08:42.972891    3465 scope.go:117] "RemoveContainer" containerID="1ed0b1ee266376c6b79117cd416b33be86c973226aa9771952520b46b98e4f09"
	Jan 27 12:08:42 default-k8s-diff-port-259716 kubelet[3465]: I0127 12:08:42.973250    3465 scope.go:117] "RemoveContainer" containerID="d725ceb84f215cde1e2ff60139beb5fd51f1bcf07357a4874d9c3ab20f832679"
	Jan 27 12:08:42 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:08:42.973385    3465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-c8zpk_kubernetes-dashboard(ef0c7d05-bcf5-4fd9-bb48-7824e1585eaf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-c8zpk" podUID="ef0c7d05-bcf5-4fd9-bb48-7824e1585eaf"
	Jan 27 12:08:43 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:08:43.613955    3465 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:08:43 default-k8s-diff-port-259716 kubelet[3465]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:08:43 default-k8s-diff-port-259716 kubelet[3465]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:08:43 default-k8s-diff-port-259716 kubelet[3465]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:08:43 default-k8s-diff-port-259716 kubelet[3465]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:08:47 default-k8s-diff-port-259716 kubelet[3465]: I0127 12:08:47.400103    3465 scope.go:117] "RemoveContainer" containerID="d725ceb84f215cde1e2ff60139beb5fd51f1bcf07357a4874d9c3ab20f832679"
	Jan 27 12:08:47 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:08:47.400741    3465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-c8zpk_kubernetes-dashboard(ef0c7d05-bcf5-4fd9-bb48-7824e1585eaf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-c8zpk" podUID="ef0c7d05-bcf5-4fd9-bb48-7824e1585eaf"
	Jan 27 12:08:51 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:08:51.584694    3465 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 12:08:51 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:08:51.584769    3465 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 12:08:51 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:08:51.585290    3465 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dd5n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-h9c6c_kube-system(761224b8-f4c1-4607-b17a-2cbad77ba72f): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 12:08:51 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:08:51.586617    3465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-h9c6c" podUID="761224b8-f4c1-4607-b17a-2cbad77ba72f"
	Jan 27 12:08:58 default-k8s-diff-port-259716 kubelet[3465]: I0127 12:08:58.570737    3465 scope.go:117] "RemoveContainer" containerID="d725ceb84f215cde1e2ff60139beb5fd51f1bcf07357a4874d9c3ab20f832679"
	Jan 27 12:08:58 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:08:58.570928    3465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-c8zpk_kubernetes-dashboard(ef0c7d05-bcf5-4fd9-bb48-7824e1585eaf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-c8zpk" podUID="ef0c7d05-bcf5-4fd9-bb48-7824e1585eaf"
	Jan 27 12:09:05 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:09:05.571954    3465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-h9c6c" podUID="761224b8-f4c1-4607-b17a-2cbad77ba72f"
	Jan 27 12:09:10 default-k8s-diff-port-259716 kubelet[3465]: I0127 12:09:10.571526    3465 scope.go:117] "RemoveContainer" containerID="d725ceb84f215cde1e2ff60139beb5fd51f1bcf07357a4874d9c3ab20f832679"
	Jan 27 12:09:10 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:09:10.571755    3465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-c8zpk_kubernetes-dashboard(ef0c7d05-bcf5-4fd9-bb48-7824e1585eaf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-c8zpk" podUID="ef0c7d05-bcf5-4fd9-bb48-7824e1585eaf"
	Jan 27 12:09:17 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:09:17.572519    3465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-h9c6c" podUID="761224b8-f4c1-4607-b17a-2cbad77ba72f"
	Jan 27 12:09:22 default-k8s-diff-port-259716 kubelet[3465]: I0127 12:09:22.571502    3465 scope.go:117] "RemoveContainer" containerID="d725ceb84f215cde1e2ff60139beb5fd51f1bcf07357a4874d9c3ab20f832679"
	Jan 27 12:09:22 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:09:22.571813    3465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-c8zpk_kubernetes-dashboard(ef0c7d05-bcf5-4fd9-bb48-7824e1585eaf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-c8zpk" podUID="ef0c7d05-bcf5-4fd9-bb48-7824e1585eaf"
	Jan 27 12:09:29 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:09:29.572488    3465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-h9c6c" podUID="761224b8-f4c1-4607-b17a-2cbad77ba72f"
	Jan 27 12:09:37 default-k8s-diff-port-259716 kubelet[3465]: I0127 12:09:37.571138    3465 scope.go:117] "RemoveContainer" containerID="d725ceb84f215cde1e2ff60139beb5fd51f1bcf07357a4874d9c3ab20f832679"
	Jan 27 12:09:37 default-k8s-diff-port-259716 kubelet[3465]: E0127 12:09:37.571314    3465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-c8zpk_kubernetes-dashboard(ef0c7d05-bcf5-4fd9-bb48-7824e1585eaf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-c8zpk" podUID="ef0c7d05-bcf5-4fd9-bb48-7824e1585eaf"
	
	
	==> kubernetes-dashboard [21e002c4b27afeec3ab4e2f860dbaaa9da6f8721fe7ae33039c2e0e5e64db255] <==
	2025/01/27 11:57:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:58:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:58:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:59:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:59:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:00:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:00:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:01:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:01:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:02:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:02:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:03:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:03:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:04:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:04:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:05:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:05:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:06:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:06:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:07:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:07:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:08:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:08:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:09:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:09:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [acf28ac6b45a2c799e899d5467056478b9fc1e6009d00388657f1367600dbaa2] <==
	I0127 11:47:51.229956       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 11:47:51.349794       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 11:47:51.349856       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 11:47:51.370888       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 11:47:51.375618       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-259716_d12a95d4-53f5-482a-a88e-e0c3e8ea7b0f!
	I0127 11:47:51.381293       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"10fd8a68-a89d-4846-a878-4be01822905c", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-259716_d12a95d4-53f5-482a-a88e-e0c3e8ea7b0f became leader
	I0127 11:47:51.478752       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-259716_d12a95d4-53f5-482a-a88e-e0c3e8ea7b0f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-259716 -n default-k8s-diff-port-259716
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-259716 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-h9c6c
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-259716 describe pod metrics-server-f79f97bbb-h9c6c
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-259716 describe pod metrics-server-f79f97bbb-h9c6c: exit status 1 (61.147039ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-h9c6c" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-259716 describe pod metrics-server-f79f97bbb-h9c6c: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1618.66s)
E0127 12:10:06.069870  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/kindnet-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:13.603036  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:20.228110  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/custom-flannel-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:22.857460  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:22.863782  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:22.875059  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:22.896353  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:22.937622  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:23.019077  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:23.180530  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:23.502210  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:24.144151  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:25.425708  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:27.987717  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:33.109546  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:43.351189  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:44.114503  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:46.316989  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/calico-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:02.764770  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:02.771087  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:02.782362  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:02.803652  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:02.845131  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:02.926539  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:03.088086  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:03.409740  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:03.832584  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:04.051031  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:05.332828  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:07.895018  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:13.016437  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:21.729078  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/enable-default-cni-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:23.258680  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:25.911857  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:43.289555  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/custom-flannel-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:43.740637  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:44.794384  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:12:19.061806  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/auto-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:12:24.702736  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:12:44.792893  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/enable-default-cni-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:12:47.328766  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:13:06.715757  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:13:22.836984  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:13:43.006795  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/kindnet-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:13:46.626214  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:14:10.390689  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:14:23.251117  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/calico-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:15:13.603810  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:15:20.227170  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/custom-flannel-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:15:22.856657  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:15:44.114731  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:15:50.557721  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:16:02.764792  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:16:21.729675  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/enable-default-cni-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:16:30.468319  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:17:19.062494  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/auto-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:17:47.327013  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:16.673429  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:22.837216  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:43.006609  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/kindnet-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:19:23.250913  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/calico-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:13.603004  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:20.227383  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/custom-flannel-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:22.856996  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:27.191123  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:44.114328  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:21:02.763895  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:21:21.729768  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/enable-default-cni-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:19.061877  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/auto-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:47.329279  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:23:22.837311  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:23:43.007487  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/kindnet-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:24:23.251463  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/calico-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:25:13.603115  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:25:20.227130  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/custom-flannel-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:25:22.129489  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/auto-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:25:22.856939  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:25:44.114421  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:02.764242  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:21.729004  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/enable-default-cni-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:45.920082  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:46.071601  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/kindnet-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:19.062099  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/auto-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:25.830292  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:26.319135  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/calico-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:47.327155  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:28:05.914045  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:28:22.837021  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:28:23.291277  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/custom-flannel-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:28:43.007472  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/kindnet-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:23.251068  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/calico-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:24.795352  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/enable-default-cni-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:13.603043  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:20.227359  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/custom-flannel-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:22.856791  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:44.114391  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:50.392115  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:02.764449  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/default-k8s-diff-port-259716/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:21.729638  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/enable-default-cni-230154/client.crt: no such file or directory" logger="UnhandledError"
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (59m34s)
		TestNetworkPlugins/group/flannel (41m17s)
		TestNetworkPlugins/group/flannel/Start (41m17s)

goroutine 3876 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 54 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc000272340, 0xc000745bc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc000900108, {0x52c0340, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x4113b0?, 0x52e6760?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc00070bc20)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00070bc20)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

goroutine 3050 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39581e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3066
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 769 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39581e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 721
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3639 [chan receive, 42 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0020bc580, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3621
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 3667 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3961ef0, 0xc000088230}, 0xc001a46f50, 0xc0000d1f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3961ef0, 0xc000088230}, 0x7?, 0xc001a46f50, 0xc001a46f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3961ef0?, 0xc000088230?}, 0xc001ee29c0?, 0x559fe0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001a46fd0?, 0x594424?, 0xc00001b440?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3639
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 217 [select, 115 minutes]:
net/http.(*persistConn).readLoop(0xc001824120)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 247
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

goroutine 1098 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001bf4180, 0xc0018fb5e0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 752
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 107 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000922f50, 0x2e)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0000d2d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x397df80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000923040)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00069a010, {0x3928fe0, 0xc00138a000}, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00069a010, 0x3b9aca00, 0x0, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 148
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 2053 [chan receive, 60 minutes]:
testing.(*T).Run(0xc001410340, {0x2c46801?, 0x55981c?}, 0xc001b82fa8)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001410340)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc001410340, 0x35e9da0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2227 [chan receive, 42 minutes]:
testing.(*T).Run(0xc001411520, {0x2c46806?, 0x391f200?}, 0xc00138bbf0)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001411520)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5be
testing.tRunner(0xc001411520, 0xc0001e6a80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2149
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 108 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3961ef0, 0xc000088230}, 0xc0013a3f50, 0xc0013a3f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3961ef0, 0xc000088230}, 0xf0?, 0xc0013a3f50, 0xc0013a3f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3961ef0?, 0xc000088230?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000509fd0?, 0x594424?, 0xc000088af0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 148
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 657 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0x7f74df9f2d08, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000117b80?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000117b80)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc000117b80)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0022da600)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0022da600)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc000253b30, {0x3954e40, 0xc0022da600})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc000253b30)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x594424?, 0xc000525860)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 654
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2212 +0x129

goroutine 109 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 108
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 218 [select, 115 minutes]:
net/http.(*persistConn).writeLoop(0xc001824120)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 247
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

goroutine 223 [select, 115 minutes]:
net/http.(*persistConn).writeLoop(0xc00076ea20)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 144
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

goroutine 3107 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc000923b90, 0x1a)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0013a1d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x397df80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000923bc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002306090, {0x3928fe0, 0xc0014e6090}, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002306090, 0x3b9aca00, 0x0, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3051
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 147 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39581e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 146
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 148 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000923040, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 146
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 222 [select, 115 minutes]:
net/http.(*persistConn).readLoop(0xc00076ea20)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 144
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

goroutine 3735 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3961ef0, 0xc000088230}, 0xc001573f50, 0xc001573f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3961ef0, 0xc000088230}, 0x80?, 0xc001573f50, 0xc001573f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3961ef0?, 0xc000088230?}, 0xc001411040?, 0x559fe0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5943c5?, 0xc00023aa80?, 0xc002242a80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3694
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 1091 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc0018bdc80, 0xc0018fb260)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1090
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3282 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3961ef0, 0xc000088230}, 0xc0021b5f50, 0xc0000cef98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3961ef0, 0xc000088230}, 0x6e?, 0xc0021b5f50, 0xc0021b5f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3961ef0?, 0xc000088230?}, 0x100000000a03516?, 0xc000670a80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002307c30?, 0xc001b99600?, 0xc0021b5fa8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3265
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 3538 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3489
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2843 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001bdef90, 0x1b)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000969d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x397df80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001bdefc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0022fa320, {0x3928fe0, 0xc001ed0240}, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0022fa320, 0x3b9aca00, 0x0, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2876
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 1294 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc0020befc0)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 1291
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

goroutine 3634 [syscall, 42 minutes]:
syscall.Syscall6(0xf7, 0x3, 0x17, 0xc000673c50, 0x4, 0xc000083950, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc00191b2d8?)
	/usr/local/go/src/os/pidfd_linux.go:110 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000671c80)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc000671c80)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc001411860, 0xc000671c80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc001411860)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc001411860, 0xc00138bbf0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2227
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2794 [chan receive, 52 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000923f00, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2789
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 2737 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39581e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2702
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2149 [chan receive, 40 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc000524820, 0xc001b82fa8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2053
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3638 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39581e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3621
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 834 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000923e40, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 721
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 3637 [select, 42 minutes]:
os/exec.(*Cmd).watchCtx(0xc000671c80, 0xc0018fb730)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3634
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 2752 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000923ed0, 0x1b)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001579d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x397df80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000923f00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001bc8c00, {0x3928fe0, 0xc000894a50}, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001bc8c00, 0x3b9aca00, 0x0, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2794
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 839 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 838
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3693 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39581e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3748
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 838 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3961ef0, 0xc000088230}, 0xc00096ff50, 0xc0000d3f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3961ef0, 0xc000088230}, 0x30?, 0xc00096ff50, 0xc00096ff98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3961ef0?, 0xc000088230?}, 0xc001570340?, 0x559fe0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xa8fd05?, 0xc0005580c0?, 0x39581e0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 834
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 3635 [IO wait, 42 minutes]:
internal/poll.runtime_pollWait(0x7f74df9f2f38, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0014862a0?, 0xc0018f2ba2?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014862a0, {0xc0018f2ba2, 0x45e, 0x45e})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00208a408, {0xc0018f2ba2?, 0xc000507da8?, 0x22a?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00138bcb0, {0x39274c0, 0xc000902468})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3927640, 0xc00138bcb0}, {0x39274c0, 0xc000902468}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00208a408?, {0x3927640, 0xc00138bcb0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc00208a408, {0x3927640, 0xc00138bcb0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3927640, 0xc00138bcb0}, {0x3927540, 0xc00208a408}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc00138bbf0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3634
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 3345 [chan receive, 42 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000923ac0, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3343
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 3489 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3961ef0, 0xc000088230}, 0xc001537f50, 0xc001537f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3961ef0, 0xc000088230}, 0x80?, 0xc001537f50, 0xc001537f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3961ef0?, 0xc000088230?}, 0xc001411040?, 0x559fe0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5943c5?, 0xc000145b00?, 0xc001b6a380?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3520
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 3283 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3282
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2793 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39581e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2789
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3734 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0022dac90, 0x19)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001519d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x397df80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0022dacc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001bc8290, {0x3928fe0, 0xc002288960}, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001bc8290, 0x3b9aca00, 0x0, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3694
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 1293 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc0020befc0)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 1291
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

goroutine 3388 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000923a90, 0x1a)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001a5fd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x397df80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000923ac0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001c4c680, {0x3928fe0, 0xc00138b230}, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001c4c680, 0x3b9aca00, 0x0, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3345
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 3344 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39581e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3343
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 837 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000923e10, 0x2b)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001572d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x397df80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000923e40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00074ae70, {0x3928fe0, 0xc00138a150}, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00074ae70, 0x3b9aca00, 0x0, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 834
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 2844 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3961ef0, 0xc000088230}, 0xc0021b6750, 0xc00139ef98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3961ef0, 0xc000088230}, 0x1c?, 0xc0021b6750, 0xc0021b6798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3961ef0?, 0xc000088230?}, 0xc001ee2d00?, 0x559fe0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0021b67d0?, 0x594424?, 0xc001691d80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2876
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 2876 [chan receive, 52 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001bdefc0, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2871
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 3488 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001d7ca50, 0x1a)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001576d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x397df80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d7ca80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0022fa030, {0x3928fe0, 0xc00161e180}, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0022fa030, 0x3b9aca00, 0x0, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3520
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 1186 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c81080, 0xc00168ecb0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1073
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 2770 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0020bc850, 0x1c)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001a5bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x397df80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0020bc880)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001c22290, {0x3928fe0, 0xc001c6c180}, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001c22290, 0x3b9aca00, 0x0, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2754
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 3233 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001c54b10, 0x1a)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00151bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x397df80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001c54b40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002307c40, {0x3928fe0, 0xc00138a480}, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002307c40, 0x3b9aca00, 0x0, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3265
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 2753 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3961ef0, 0xc000088230}, 0xc00050cf50, 0xc000962f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3961ef0, 0xc000088230}, 0xe0?, 0xc00050cf50, 0xc00050cf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3961ef0?, 0xc000088230?}, 0xc00148e000?, 0x559fe0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5943c5?, 0xc001d5d500?, 0xc001d840e0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2794
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 3668 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3667
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3265 [chan receive, 44 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001c54b40, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3263
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 2772 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2771
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3389 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3961ef0, 0xc000088230}, 0xc000965f50, 0xc000965f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3961ef0, 0xc000088230}, 0x30?, 0xc000965f50, 0xc000965f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3961ef0?, 0xc000088230?}, 0xc001ee24e0?, 0x559fe0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0009707d0?, 0x594424?, 0xc00168fe30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3345
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 2875 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39581e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2871
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2845 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2844
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2771 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3961ef0, 0xc000088230}, 0xc00096ff50, 0xc001538f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3961ef0, 0xc000088230}, 0x40?, 0xc00096ff50, 0xc00096ff98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3961ef0?, 0xc000088230?}, 0x7e5376?, 0xc000145c80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5943c5?, 0xc001606780?, 0xc001644540?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2754
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 2754 [chan receive, 52 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0020bc880, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2702
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 3736 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3735
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3636 [IO wait, 40 minutes]:
internal/poll.runtime_pollWait(0x7f74df9f3168, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001486360?, 0xc002356b98?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001486360, {0xc002356b98, 0x1f468, 0x1f468})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00208a420, {0xc002356b98?, 0x0?, 0x1ff2c?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00138bce0, {0x39274c0, 0xc001530508})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3927640, 0xc00138bce0}, {0x39274c0, 0xc001530508}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00208a420?, {0x3927640, 0xc00138bce0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc00208a420, {0x3927640, 0xc00138bce0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3927640, 0xc00138bce0}, {0x3927540, 0xc00208a420}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0019002a0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3634
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 2818 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2753
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3390 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3389
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3264 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39581e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3263
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3694 [chan receive, 40 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0022dacc0, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3748
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 3519 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39581e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3441
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3051 [chan receive, 44 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000923bc0, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3066
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 3108 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3961ef0, 0xc000088230}, 0xc001a5cf50, 0xc001a5cf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3961ef0, 0xc000088230}, 0x7?, 0xc001a5cf50, 0xc001a5cf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3961ef0?, 0xc000088230?}, 0xc001ee2680?, 0x559fe0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005067d0?, 0x594424?, 0xc001b1c1b0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3051
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 3109 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3108
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3520 [chan receive, 42 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d7ca80, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3441
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 3666 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0020bc550, 0x1a)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00153dd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x397df80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0020bc580)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002220c40, {0x3928fe0, 0xc001948b70}, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002220c40, 0x3b9aca00, 0x0, 0x1, 0xc000088230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3639
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf


Test pass (234/272)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.82
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 3.91
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.13
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 83.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 208.13
29 TestAddons/serial/Volcano 43.65
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 8.59
35 TestAddons/parallel/Registry 16.91
36 TestAddons/parallel/Ingress 18.74
37 TestAddons/parallel/InspektorGadget 11.7
38 TestAddons/parallel/MetricsServer 5.84
40 TestAddons/parallel/CSI 38.09
41 TestAddons/parallel/Headlamp 20.57
42 TestAddons/parallel/CloudSpanner 5.59
43 TestAddons/parallel/LocalPath 53.27
44 TestAddons/parallel/NvidiaDevicePlugin 5.53
45 TestAddons/parallel/Yakd 11.75
47 TestAddons/StoppedEnableDisable 91.11
48 TestCertOptions 75.87
49 TestCertExpiration 325.03
51 TestForceSystemdFlag 49.8
52 TestForceSystemdEnv 44.48
54 TestKVMDriverInstallOrUpdate 1.27
58 TestErrorSpam/setup 40.54
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.72
61 TestErrorSpam/pause 1.5
62 TestErrorSpam/unpause 1.68
63 TestErrorSpam/stop 4.66
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79.14
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 46.77
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.93
75 TestFunctional/serial/CacheCmd/cache/add_local 0.9
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.48
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 44.86
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.32
86 TestFunctional/serial/LogsFileCmd 1.34
87 TestFunctional/serial/InvalidService 4.9
89 TestFunctional/parallel/ConfigCmd 0.37
90 TestFunctional/parallel/DashboardCmd 25.23
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.77
97 TestFunctional/parallel/ServiceCmdConnect 10.53
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 36.97
101 TestFunctional/parallel/SSHCmd 0.41
102 TestFunctional/parallel/CpCmd 1.36
103 TestFunctional/parallel/MySQL 26.02
104 TestFunctional/parallel/FileSync 0.2
105 TestFunctional/parallel/CertSync 1.19
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
113 TestFunctional/parallel/License 0.18
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
127 TestFunctional/parallel/Version/short 0.05
128 TestFunctional/parallel/Version/components 0.61
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
133 TestFunctional/parallel/ImageCommands/ImageBuild 2.93
134 TestFunctional/parallel/ImageCommands/Setup 0.35
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.25
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.26
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.07
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
142 TestFunctional/parallel/ServiceCmd/List 0.27
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
145 TestFunctional/parallel/ProfileCmd/profile_list 0.36
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
148 TestFunctional/parallel/ServiceCmd/Format 0.34
149 TestFunctional/parallel/MountCmd/any-port 15.81
150 TestFunctional/parallel/ServiceCmd/URL 0.32
151 TestFunctional/parallel/MountCmd/specific-port 1.09
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.55
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 195.51
160 TestMultiControlPlane/serial/DeployApp 6.24
161 TestMultiControlPlane/serial/PingHostFromPods 1.2
162 TestMultiControlPlane/serial/AddWorkerNode 53.09
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
165 TestMultiControlPlane/serial/CopyFile 12.86
166 TestMultiControlPlane/serial/StopSecondaryNode 91.29
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
168 TestMultiControlPlane/serial/RestartSecondaryNode 38.47
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 441.3
171 TestMultiControlPlane/serial/DeleteSecondaryNode 6.6
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.61
173 TestMultiControlPlane/serial/StopCluster 272.09
174 TestMultiControlPlane/serial/RestartCluster 164.02
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
176 TestMultiControlPlane/serial/AddSecondaryNode 73.73
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
181 TestJSONOutput/start/Command 84.24
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.68
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.59
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 6.37
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 88.62
213 TestMountStart/serial/StartWithMountFirst 25.92
214 TestMountStart/serial/VerifyMountFirst 0.36
215 TestMountStart/serial/StartWithMountSecond 25.98
216 TestMountStart/serial/VerifyMountSecond 0.37
217 TestMountStart/serial/DeleteFirst 0.59
218 TestMountStart/serial/VerifyMountPostDelete 0.37
219 TestMountStart/serial/Stop 1.29
220 TestMountStart/serial/RestartStopped 23.32
221 TestMountStart/serial/VerifyMountPostStop 0.37
224 TestMultiNode/serial/FreshStart2Nodes 109.64
225 TestMultiNode/serial/DeployApp2Nodes 3.93
226 TestMultiNode/serial/PingHostFrom2Pods 0.78
227 TestMultiNode/serial/AddNode 48.58
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.57
230 TestMultiNode/serial/CopyFile 7.15
231 TestMultiNode/serial/StopNode 2.12
232 TestMultiNode/serial/StartAfterStop 33.41
233 TestMultiNode/serial/RestartKeepsNodes 324.13
234 TestMultiNode/serial/DeleteNode 2.03
235 TestMultiNode/serial/StopMultiNode 181.75
236 TestMultiNode/serial/RestartMultiNode 107.09
237 TestMultiNode/serial/ValidateNameConflict 43.48
242 TestPreload 255.13
244 TestScheduledStopUnix 116.91
248 TestRunningBinaryUpgrade 249.53
250 TestKubernetesUpgrade 195.56
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
261 TestNoKubernetes/serial/StartWithK8s 93.85
274 TestPause/serial/Start 110.91
275 TestNoKubernetes/serial/StartWithStopK8s 82.64
276 TestPause/serial/SecondStartNoReconfiguration 46.23
277 TestNoKubernetes/serial/Start 25.61
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
279 TestNoKubernetes/serial/ProfileList 32.57
280 TestPause/serial/Pause 0.69
281 TestPause/serial/VerifyStatus 0.24
282 TestPause/serial/Unpause 0.59
283 TestPause/serial/PauseAgain 0.71
284 TestPause/serial/DeletePaused 0.67
285 TestPause/serial/VerifyDeletedResources 14.89
286 TestNoKubernetes/serial/Stop 1.3
287 TestNoKubernetes/serial/StartNoArgs 43.54
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
289 TestStoppedBinaryUpgrade/Setup 0.45
290 TestStoppedBinaryUpgrade/Upgrade 163.06
292 TestStartStop/group/old-k8s-version/serial/FirstStart 183.55
294 TestStartStop/group/no-preload/serial/FirstStart 109.72
295 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
297 TestStartStop/group/embed-certs/serial/FirstStart 84.59
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.93
300 TestStartStop/group/old-k8s-version/serial/DeployApp 8.47
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.97
302 TestStartStop/group/no-preload/serial/DeployApp 8.32
303 TestStartStop/group/old-k8s-version/serial/Stop 90.67
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
305 TestStartStop/group/no-preload/serial/Stop 90.86
306 TestStartStop/group/embed-certs/serial/DeployApp 9.27
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
308 TestStartStop/group/embed-certs/serial/Stop 90.87
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
311 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.07
312 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
313 TestStartStop/group/old-k8s-version/serial/SecondStart 160.94
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
317 TestStartStop/group/embed-certs/serial/SecondStart 308.56
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
320 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.07
322 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
323 TestStartStop/group/old-k8s-version/serial/Pause 2.54
325 TestStartStop/group/newest-cni/serial/FirstStart 51.48
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
328 TestStartStop/group/newest-cni/serial/Stop 2.3
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
330 TestStartStop/group/newest-cni/serial/SecondStart 33.05
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
334 TestStartStop/group/newest-cni/serial/Pause 2.4
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
343 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
344 TestStartStop/group/embed-certs/serial/Pause 3
TestDownloadOnly/v1.20.0/json-events (7.82s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-262382 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-262382 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (7.819880118s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.82s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 10:32:10.511558  356204 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 10:32:10.511666  356204 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-262382
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-262382: exit status 85 (58.654205ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-262382 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC |          |
	|         | -p download-only-262382        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 10:32:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 10:32:02.734923  356216 out.go:345] Setting OutFile to fd 1 ...
	I0127 10:32:02.735044  356216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:32:02.735054  356216 out.go:358] Setting ErrFile to fd 2...
	I0127 10:32:02.735058  356216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:32:02.735259  356216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
	W0127 10:32:02.735432  356216 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20319-348858/.minikube/config/config.json: open /home/jenkins/minikube-integration/20319-348858/.minikube/config/config.json: no such file or directory
	I0127 10:32:02.736052  356216 out.go:352] Setting JSON to true
	I0127 10:32:02.737483  356216 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4468,"bootTime":1737969455,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 10:32:02.737619  356216 start.go:139] virtualization: kvm guest
	I0127 10:32:02.739728  356216 out.go:97] [download-only-262382] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0127 10:32:02.739850  356216 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 10:32:02.739875  356216 notify.go:220] Checking for updates...
	I0127 10:32:02.741267  356216 out.go:169] MINIKUBE_LOCATION=20319
	I0127 10:32:02.742540  356216 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 10:32:02.743756  356216 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 10:32:02.744918  356216 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	I0127 10:32:02.745926  356216 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 10:32:02.747759  356216 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 10:32:02.748000  356216 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 10:32:02.860004  356216 out.go:97] Using the kvm2 driver based on user configuration
	I0127 10:32:02.860026  356216 start.go:297] selected driver: kvm2
	I0127 10:32:02.860033  356216 start.go:901] validating driver "kvm2" against <nil>
	I0127 10:32:02.860381  356216 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 10:32:02.860506  356216 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-348858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 10:32:02.874940  356216 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 10:32:02.874974  356216 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 10:32:02.875513  356216 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 10:32:02.875703  356216 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 10:32:02.875748  356216 cni.go:84] Creating CNI manager for ""
	I0127 10:32:02.875815  356216 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 10:32:02.875830  356216 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 10:32:02.875908  356216 start.go:340] cluster config:
	{Name:download-only-262382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-262382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 10:32:02.876160  356216 iso.go:125] acquiring lock: {Name:mk6cdd2a3d0bfb3682c1f0c806368944f23c4809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 10:32:02.877527  356216 out.go:97] Downloading VM boot image ...
	I0127 10:32:02.877552  356216 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 10:32:05.897248  356216 out.go:97] Starting "download-only-262382" primary control-plane node in "download-only-262382" cluster
	I0127 10:32:05.897277  356216 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 10:32:05.922547  356216 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0127 10:32:05.922572  356216 cache.go:56] Caching tarball of preloaded images
	I0127 10:32:05.922709  356216 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 10:32:05.923964  356216 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 10:32:05.923977  356216 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0127 10:32:05.947841  356216 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-262382 host does not exist
	  To start a cluster, run: "minikube start -p download-only-262382"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-262382
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.32.1/json-events (3.91s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-138017 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-138017 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (3.905029186s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (3.91s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 10:32:14.736500  356204 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 10:32:14.736544  356204 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-138017
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-138017: exit status 85 (60.649744ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-262382 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC |                     |
	|         | -p download-only-262382        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC | 27 Jan 25 10:32 UTC |
	| delete  | -p download-only-262382        | download-only-262382 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC | 27 Jan 25 10:32 UTC |
	| start   | -o=json --download-only        | download-only-138017 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC |                     |
	|         | -p download-only-138017        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 10:32:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 10:32:10.873967  356410 out.go:345] Setting OutFile to fd 1 ...
	I0127 10:32:10.874059  356410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:32:10.874067  356410 out.go:358] Setting ErrFile to fd 2...
	I0127 10:32:10.874071  356410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:32:10.874239  356410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
	I0127 10:32:10.874749  356410 out.go:352] Setting JSON to true
	I0127 10:32:10.875569  356410 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4476,"bootTime":1737969455,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 10:32:10.875677  356410 start.go:139] virtualization: kvm guest
	I0127 10:32:10.877428  356410 out.go:97] [download-only-138017] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 10:32:10.877598  356410 notify.go:220] Checking for updates...
	I0127 10:32:10.878633  356410 out.go:169] MINIKUBE_LOCATION=20319
	I0127 10:32:10.879676  356410 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 10:32:10.880857  356410 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 10:32:10.882045  356410 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	I0127 10:32:10.883006  356410 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-138017 host does not exist
	  To start a cluster, run: "minikube start -p download-only-138017"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

TestDownloadOnly/v1.32.1/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.13s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-138017
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I0127 10:32:15.312766  356204 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-181107 --alsologtostderr --binary-mirror http://127.0.0.1:41983 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-181107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-181107
--- PASS: TestBinaryMirror (0.61s)
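Note: the kubectl URL logged above carries a go-getter style "?checksum=file:<url>.sha256" suffix, meaning the binary is verified against a hash file published next to it. A minimal standalone sketch of that verification pattern (illustrative only; this is not minikube's own download code, and the helper name fetch is an assumption):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads url into memory and fails on any non-200 response.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// Same artifact as in the log; its checksum lives at "<url>.sha256".
	artifact := "https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl"

	data, err := fetch(artifact)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(artifact + ".sha256")
	if err != nil {
		panic(err)
	}

	sum := sha256.Sum256(data)
	got := hex.EncodeToString(sum[:])
	want := strings.Fields(string(sumFile))[0] // the hash is the first field
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("checksum OK for", artifact)
}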

TestOffline (83.55s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-895866 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-895866 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m22.632875801s)
helpers_test.go:175: Cleaning up "offline-containerd-895866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-895866
--- PASS: TestOffline (83.55s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-245022
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-245022: exit status 85 (56.059108ms)

-- stdout --
	* Profile "addons-245022" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-245022"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-245022
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-245022: exit status 85 (54.539912ms)

-- stdout --
	* Profile "addons-245022" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-245022"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (208.13s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-245022 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-245022 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m28.128716191s)
--- PASS: TestAddons/Setup (208.13s)

TestAddons/serial/Volcano (43.65s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 25.456804ms
addons_test.go:815: volcano-admission stabilized in 25.503918ms
addons_test.go:807: volcano-scheduler stabilized in 28.096349ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-2t88x" [11b8b058-8510-42ce-93ba-1236470dbd67] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004352362s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-fpqtv" [07628eef-8d0b-436a-8faa-9c5b336b36ff] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00596544s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-lmt7w" [c7d81dd4-0d45-46cc-8b53-373a6f018162] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003516567s
addons_test.go:842: (dbg) Run:  kubectl --context addons-245022 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-245022 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-245022 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a02392d9-6747-4c40-a854-f78066e7aef1] Pending
helpers_test.go:344: "test-job-nginx-0" [a02392d9-6747-4c40-a854-f78066e7aef1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a02392d9-6747-4c40-a854-f78066e7aef1] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.005035296s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245022 addons disable volcano --alsologtostderr -v=1: (11.26148791s)
--- PASS: TestAddons/serial/Volcano (43.65s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-245022 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-245022 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (8.59s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-245022 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-245022 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [485d9831-8aee-4b2e-bd8b-fcafa8cf7ee1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [485d9831-8aee-4b2e-bd8b-fcafa8cf7ee1] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003964595s
addons_test.go:633: (dbg) Run:  kubectl --context addons-245022 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-245022 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-245022 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.59s)

TestAddons/parallel/Registry (16.91s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.766766ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-zp684" [8706898c-41f9-4ae5-abbc-8ac3706e8b44] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00454179s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nr4cr" [2cbd83c3-a5ee-48dd-8263-58243c41217f] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004380075s
addons_test.go:331: (dbg) Run:  kubectl --context addons-245022 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-245022 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-245022 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.127264556s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 ip
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.91s)

TestAddons/parallel/Ingress (18.74s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-245022 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-245022 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-245022 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4116936c-98e7-49a5-b1ab-fbe5a17cd7c0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4116936c-98e7-49a5-b1ab-fbe5a17cd7c0] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.006412572s
I0127 10:37:21.021092  356204 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-245022 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.235
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245022 addons disable ingress --alsologtostderr -v=1: (7.668924885s)
--- PASS: TestAddons/parallel/Ingress (18.74s)
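Note: the Ingress step above talks to the node IP directly and relies on the Host header (curl -H 'Host: nginx.example.com') to select the routing rule, while the nslookup against 192.168.39.235 exercises ingress-dns the same way. A rough Go equivalent of that curl call (illustrative only; the IP is the one printed in this particular run):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Address reported by "minikube ip" in this run; adjust for your cluster.
	req, err := http.NewRequest(http.MethodGet, "http://192.168.39.235/", nil)
	if err != nil {
		panic(err)
	}
	// Overriding Host makes ingress-nginx route the request to the nginx
	// Service, exactly like curl's -H 'Host: nginx.example.com'.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("ingress responded:", resp.Status)
}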

TestAddons/parallel/InspektorGadget (11.7s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lflcp" [fe77f268-0b33-4be5-8f70-744144f3b1b7] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003705218s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245022 addons disable inspektor-gadget --alsologtostderr -v=1: (5.693633746s)
--- PASS: TestAddons/parallel/InspektorGadget (11.70s)

TestAddons/parallel/MetricsServer (5.84s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.7931ms
I0127 10:36:45.416298  356204 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 10:36:45.416327  356204 kapi.go:107] duration metric: took 6.191327ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-sctkt" [26019bf8-dd8d-4b4a-88af-f077c00d6239] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003444345s
addons_test.go:402: (dbg) Run:  kubectl --context addons-245022 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)

TestAddons/parallel/CSI (38.09s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 6.201867ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-245022 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-245022 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ebaf0afa-16b6-4faa-9f4a-f01cce4f4667] Pending
helpers_test.go:344: "task-pv-pod" [ebaf0afa-16b6-4faa-9f4a-f01cce4f4667] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ebaf0afa-16b6-4faa-9f4a-f01cce4f4667] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004454916s
addons_test.go:511: (dbg) Run:  kubectl --context addons-245022 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-245022 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-245022 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-245022 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-245022 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-245022 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2025/01/27 10:37:01 [DEBUG] GET http://192.168.39.235:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-245022 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ecad3a36-4d4a-4771-991d-7b841d9b3d2c] Pending
helpers_test.go:344: "task-pv-pod-restore" [ecad3a36-4d4a-4771-991d-7b841d9b3d2c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ecad3a36-4d4a-4771-991d-7b841d9b3d2c] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005846818s
addons_test.go:553: (dbg) Run:  kubectl --context addons-245022 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-245022 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-245022 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245022 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.643792223s)
--- PASS: TestAddons/parallel/CSI (38.09s)

TestAddons/parallel/Headlamp (20.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-245022 --alsologtostderr -v=1
I0127 10:36:45.410147  356204 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-ldvg4" [2f07c37c-b948-40de-8943-ee509e98809e] Pending
helpers_test.go:344: "headlamp-69d78d796f-ldvg4" [2f07c37c-b948-40de-8943-ee509e98809e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-ldvg4" [2f07c37c-b948-40de-8943-ee509e98809e] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.00373955s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245022 addons disable headlamp --alsologtostderr -v=1: (5.705198614s)
--- PASS: TestAddons/parallel/Headlamp (20.57s)

TestAddons/parallel/CloudSpanner (5.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-gtv6s" [f2975bb5-d0eb-4ab2-809f-8bcd6686b653] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003582333s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

TestAddons/parallel/LocalPath (53.27s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-245022 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-245022 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245022 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [438ac7bf-48d3-4e5f-a12b-246f0a96f99b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [438ac7bf-48d3-4e5f-a12b-246f0a96f99b] Running
helpers_test.go:344: "test-local-path" [438ac7bf-48d3-4e5f-a12b-246f0a96f99b] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [438ac7bf-48d3-4e5f-a12b-246f0a96f99b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.009497362s
addons_test.go:906: (dbg) Run:  kubectl --context addons-245022 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 ssh "cat /opt/local-path-provisioner/pvc-11b7cb84-ce1c-437d-8c1b-fe8185d7099c_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-245022 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-245022 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245022 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.471190619s)
--- PASS: TestAddons/parallel/LocalPath (53.27s)

TestAddons/parallel/NvidiaDevicePlugin (5.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-59srn" [e5a920ab-30c8-46a0-9ff1-0a50fd600af5] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003915253s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.53s)

TestAddons/parallel/Yakd (11.75s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-q2l8k" [dd2e0891-d945-4cd8-93b4-535343266ee9] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003790292s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245022 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245022 addons disable yakd --alsologtostderr -v=1: (5.743945679s)
--- PASS: TestAddons/parallel/Yakd (11.75s)

TestAddons/StoppedEnableDisable (91.11s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-245022
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-245022: (1m30.836113322s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-245022
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-245022
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-245022
--- PASS: TestAddons/StoppedEnableDisable (91.11s)

TestCertOptions (75.87s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-404441 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-404441 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m14.533814508s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-404441 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-404441 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-404441 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-404441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-404441
--- PASS: TestCertOptions (75.87s)

TestCertExpiration (325.03s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-394938 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-394938 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m31.343540159s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-394938 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-394938 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (52.213452988s)
helpers_test.go:175: Cleaning up "cert-expiration-394938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-394938
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-394938: (1.472721632s)
--- PASS: TestCertExpiration (325.03s)

TestForceSystemdFlag (49.8s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-647212 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-647212 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (48.657330522s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-647212 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-647212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-647212
--- PASS: TestForceSystemdFlag (49.80s)

TestForceSystemdEnv (44.48s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-357656 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-357656 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (43.619666664s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-357656 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-357656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-357656
--- PASS: TestForceSystemdEnv (44.48s)

TestKVMDriverInstallOrUpdate (1.27s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0127 11:32:37.773910  356204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:32:37.774063  356204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 11:32:37.804511  356204 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 11:32:37.804836  356204 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 11:32:37.804880  356204 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2427008401/001/docker-machine-driver-kvm2
I0127 11:32:37.924469  356204 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2427008401/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc0004ffc90 gz:0xc0004ffc98 tar:0xc0004ffc40 tar.bz2:0xc0004ffc50 tar.gz:0xc0004ffc60 tar.xz:0xc0004ffc70 tar.zst:0xc0004ffc80 tbz2:0xc0004ffc50 tgz:0xc0004ffc60 txz:0xc0004ffc70 tzst:0xc0004ffc80 xz:0xc0004ffca0 zip:0xc0004ffcb0 zst:0xc0004ffca8] Getters:map[file:0xc001dfec20 http:0xc001a031d0 https:0xc001a03220] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 11:32:37.924522  356204 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2427008401/001/docker-machine-driver-kvm2
I0127 11:32:38.513492  356204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:32:38.513607  356204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 11:32:38.551949  356204 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 11:32:38.551995  356204 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 11:32:38.552074  356204 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 11:32:38.552116  356204 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2427008401/002/docker-machine-driver-kvm2
I0127 11:32:38.576158  356204 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2427008401/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc0004ffc90 gz:0xc0004ffc98 tar:0xc0004ffc40 tar.bz2:0xc0004ffc50 tar.gz:0xc0004ffc60 tar.xz:0xc0004ffc70 tar.zst:0xc0004ffc80 tbz2:0xc0004ffc50 tgz:0xc0004ffc60 txz:0xc0004ffc70 tzst:0xc0004ffc80 xz:0xc0004ffca0 zip:0xc0004ffcb0 zst:0xc0004ffca8] Getters:map[file:0xc001bc81c0 http:0xc00221a280 https:0xc00221a2d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 11:32:38.576215  356204 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2427008401/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.27s)
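Note: the warnings above show the installer's fallback order: the arch-specific asset (-amd64) is rejected because its .sha256 checksum file returns 404, so the download is retried against the common, unsuffixed URL. A small sketch of that try-in-order pattern (firstReachable is a hypothetical helper for illustration, not the code under test):

package main

import (
	"fmt"
	"net/http"
)

// firstReachable returns the first candidate URL that answers 200 OK,
// mirroring the "arch-specific first, then common version" order in the log.
func firstReachable(candidates []string) (string, error) {
	for _, u := range candidates {
		resp, err := http.Head(u)
		if err != nil {
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return u, nil
		}
	}
	return "", fmt.Errorf("no candidate URL succeeded")
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
	url, err := firstReachable([]string{base + "-amd64", base})
	if err != nil {
		fmt.Println("download would fail:", err)
		return
	}
	fmt.Println("would download:", url)
}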

TestErrorSpam/setup (40.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-625948 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-625948 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-625948 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-625948 --driver=kvm2  --container-runtime=containerd: (40.543611524s)
--- PASS: TestErrorSpam/setup (40.54s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
x
+
TestErrorSpam/pause (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 pause
--- PASS: TestErrorSpam/pause (1.50s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
x
+
TestErrorSpam/stop (4.66s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 stop: (1.319972557s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 stop: (1.480133389s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-625948 --log_dir /tmp/nospam-625948 stop: (1.8579743s)
--- PASS: TestErrorSpam/stop (4.66s)
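The TestErrorSpam group above repeatedly runs start/status/pause/unpause/stop against the nospam-625948 profile and checks that the output stays quiet. As a rough, hypothetical illustration of that kind of check (this is not the logic in error_spam_test.go), one can run a command and flag any warning- or error-looking lines:

    // Sketch: run a minikube command and report lines that look like unexpected warnings/errors.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "nospam-625948",
            "--log_dir", "/tmp/nospam-625948", "status").CombinedOutput()
        if err != nil {
            fmt.Println("command failed:", err)
        }
        for _, line := range strings.Split(string(out), "\n") {
            // Any line mentioning WARNING or ERROR counts as "spam" for this illustration.
            if strings.Contains(line, "WARNING") || strings.Contains(line, "ERROR") {
                fmt.Println("unexpected output line:", line)
            }
        }
    }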

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/test/nested/copy/356204/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (79.14s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-430173 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0127 10:40:44.114816  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:40:44.121252  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:40:44.132662  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:40:44.154098  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:40:44.195501  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:40:44.276949  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:40:44.438498  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:40:44.760235  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:40:45.402292  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:40:46.683948  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:40:49.245698  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:40:54.367107  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:41:04.608392  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:41:25.090699  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-430173 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m19.134986786s)
--- PASS: TestFunctional/serial/StartWithProxy (79.14s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (46.77s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0127 10:41:37.344591  356204 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-430173 --alsologtostderr -v=8
E0127 10:42:06.052273  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-430173 --alsologtostderr -v=8: (46.768106904s)
functional_test.go:663: soft start took 46.768916606s for "functional-430173" cluster.
I0127 10:42:24.113065  356204 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (46.77s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-430173 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-430173 cache add registry.k8s.io/pause:3.3: (1.093453334s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-430173 /tmp/TestFunctionalserialCacheCmdcacheadd_local2802938000/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 cache add minikube-local-cache-test:functional-430173
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 cache delete minikube-local-cache-test:functional-430173
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-430173
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.90s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430173 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (208.739932ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)
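The cache_reload run above deletes an image inside the node, confirms `crictl inspecti` fails, runs `minikube cache reload`, and confirms the image is back. A condensed, illustrative Go sketch of that sequence follows (profile name, binary path, and image come from the log; the helper itself is hypothetical, not the test code):

    // Sketch: exercise the cache reload round-trip for a single cached image.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // run invokes the minikube binary with the given args and echoes its output.
    func run(args ...string) error {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        fmt.Printf("$ minikube %v\n%s", args, out)
        return err
    }

    func main() {
        p := "functional-430173"
        img := "registry.k8s.io/pause:latest"
        _ = run("-p", p, "ssh", "sudo", "crictl", "rmi", img) // remove the image from the node
        if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", img); err == nil {
            fmt.Println("image unexpectedly still present")
        }
        _ = run("-p", p, "cache", "reload") // re-push images from the host-side cache
        if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", img); err != nil {
            fmt.Println("image missing after reload:", err)
        }
    }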

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 kubectl -- --context functional-430173 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-430173 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (44.86s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-430173 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-430173 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.864054444s)
functional_test.go:761: restart took 44.864177825s for "functional-430173" cluster.
I0127 10:43:15.020127  356204 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (44.86s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-430173 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-430173 logs: (1.317428008s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 logs --file /tmp/TestFunctionalserialLogsFileCmd4058009933/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-430173 logs --file /tmp/TestFunctionalserialLogsFileCmd4058009933/001/logs.txt: (1.335963123s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.9s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-430173 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-430173
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-430173: exit status 115 (272.085916ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.166:32707 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-430173 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-430173 delete -f testdata/invalidsvc.yaml: (1.433455999s)
--- PASS: TestFunctional/serial/InvalidService (4.90s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430173 config get cpus: exit status 14 (69.402845ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430173 config get cpus: exit status 14 (52.267317ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (25.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-430173 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-430173 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 364195: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (25.23s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-430173 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-430173 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (152.323858ms)

                                                
                                                
-- stdout --
	* [functional-430173] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 10:43:34.123425  363927 out.go:345] Setting OutFile to fd 1 ...
	I0127 10:43:34.123741  363927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:43:34.123754  363927 out.go:358] Setting ErrFile to fd 2...
	I0127 10:43:34.123761  363927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:43:34.124025  363927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
	I0127 10:43:34.124756  363927 out.go:352] Setting JSON to false
	I0127 10:43:34.126067  363927 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5159,"bootTime":1737969455,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 10:43:34.126166  363927 start.go:139] virtualization: kvm guest
	I0127 10:43:34.128010  363927 out.go:177] * [functional-430173] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 10:43:34.129757  363927 notify.go:220] Checking for updates...
	I0127 10:43:34.129774  363927 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 10:43:34.133106  363927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 10:43:34.134327  363927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 10:43:34.135434  363927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	I0127 10:43:34.136445  363927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 10:43:34.137504  363927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 10:43:34.139003  363927 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 10:43:34.139566  363927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:43:34.139641  363927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:43:34.157527  363927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38073
	I0127 10:43:34.157987  363927 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:43:34.158785  363927 main.go:141] libmachine: Using API Version  1
	I0127 10:43:34.158814  363927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:43:34.159370  363927 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:43:34.159600  363927 main.go:141] libmachine: (functional-430173) Calling .DriverName
	I0127 10:43:34.159899  363927 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 10:43:34.160324  363927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:43:34.160359  363927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:43:34.175227  363927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0127 10:43:34.175811  363927 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:43:34.177062  363927 main.go:141] libmachine: Using API Version  1
	I0127 10:43:34.177100  363927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:43:34.177461  363927 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:43:34.177698  363927 main.go:141] libmachine: (functional-430173) Calling .DriverName
	I0127 10:43:34.211779  363927 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 10:43:34.212850  363927 start.go:297] selected driver: kvm2
	I0127 10:43:34.212864  363927 start.go:901] validating driver "kvm2" against &{Name:functional-430173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-430173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.166 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 10:43:34.212993  363927 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 10:43:34.214973  363927 out.go:201] 
	W0127 10:43:34.216017  363927 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 10:43:34.217075  363927 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-430173 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.28s)
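The DryRun exit above comes from minikube's pre-flight memory validation: the requested 250MB is below the usable minimum of 1800MB reported in the log, so the run stops with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) before any VM work starts. A minimal sketch of that kind of check follows; the threshold and exit code are taken from the log, but the code is illustrative and not minikube's actual flag handling:

    // Sketch: reject a memory request below the usable minimum before starting anything.
    package main

    import (
        "fmt"
        "os"
    )

    const usableMinimumMB = 1800 // from the RSRC_INSUFFICIENT_REQ_MEMORY message above

    func main() {
        requestedMB := 250 // --memory 250MB, as in the dry-run invocation
        if requestedMB < usableMinimumMB {
            fmt.Printf("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB\n",
                requestedMB, usableMinimumMB)
            os.Exit(23)
        }
        fmt.Println("memory request accepted")
    }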

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-430173 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-430173 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (144.247253ms)

                                                
                                                
-- stdout --
	* [functional-430173] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 10:43:34.398675  364015 out.go:345] Setting OutFile to fd 1 ...
	I0127 10:43:34.398782  364015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:43:34.398791  364015 out.go:358] Setting ErrFile to fd 2...
	I0127 10:43:34.398795  364015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:43:34.399077  364015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
	I0127 10:43:34.399613  364015 out.go:352] Setting JSON to false
	I0127 10:43:34.400552  364015 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5159,"bootTime":1737969455,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 10:43:34.400666  364015 start.go:139] virtualization: kvm guest
	I0127 10:43:34.402366  364015 out.go:177] * [functional-430173] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 10:43:34.403496  364015 notify.go:220] Checking for updates...
	I0127 10:43:34.403499  364015 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 10:43:34.405075  364015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 10:43:34.406594  364015 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	I0127 10:43:34.407808  364015 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	I0127 10:43:34.408958  364015 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 10:43:34.410137  364015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 10:43:34.411880  364015 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 10:43:34.412466  364015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:43:34.412526  364015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:43:34.430805  364015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0127 10:43:34.431232  364015 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:43:34.431854  364015 main.go:141] libmachine: Using API Version  1
	I0127 10:43:34.431885  364015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:43:34.432227  364015 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:43:34.432426  364015 main.go:141] libmachine: (functional-430173) Calling .DriverName
	I0127 10:43:34.432672  364015 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 10:43:34.432965  364015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:43:34.433002  364015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:43:34.448940  364015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I0127 10:43:34.449406  364015 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:43:34.450007  364015 main.go:141] libmachine: Using API Version  1
	I0127 10:43:34.450029  364015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:43:34.450387  364015 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:43:34.450617  364015 main.go:141] libmachine: (functional-430173) Calling .DriverName
	I0127 10:43:34.486297  364015 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0127 10:43:34.487419  364015 start.go:297] selected driver: kvm2
	I0127 10:43:34.487434  364015 start.go:901] validating driver "kvm2" against &{Name:functional-430173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-430173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.166 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 10:43:34.487529  364015 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 10:43:34.489714  364015 out.go:201] 
	W0127 10:43:34.490959  364015 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 10:43:34.492083  364015 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-430173 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-430173 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-48txj" [44f193db-963f-43cf-b135-7e3327795701] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-48txj" [44f193db-963f-43cf-b135-7e3327795701] Running
E0127 10:43:27.974032  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004662134s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.50.166:31217
functional_test.go:1675: http://192.168.50.166:31217: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-48txj

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.166:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.166:31217
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.53s)
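The final step of ServiceCmdConnect above takes the URL printed by "minikube service hello-node-connect --url" and checks that the echoserver behind the NodePort answers. The Go sketch below reproduces just that check; the URL is the one from this run (it will differ elsewhere), and the helper is illustrative rather than the test's own code:

    // Sketch: probe the NodePort URL returned for hello-node-connect and inspect the echo body.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func main() {
        url := "http://192.168.50.166:31217" // from "found endpoint for hello-node-connect" above
        resp, err := http.Get(url)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // The echoserver reports the serving pod's hostname in its response body.
        if strings.Contains(string(body), "Hostname: hello-node-connect") {
            fmt.Println("success! body:\n" + string(body))
        } else {
            fmt.Println("unexpected body:\n" + string(body))
        }
    }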

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (36.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [89b6ac13-6220-4ded-ad10-bfee9e4ab626] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003857723s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-430173 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-430173 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-430173 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-430173 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-430173 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [41719769-c359-455a-b6e9-eeba3dc42330] Pending
helpers_test.go:344: "sp-pod" [41719769-c359-455a-b6e9-eeba3dc42330] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [41719769-c359-455a-b6e9-eeba3dc42330] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003896343s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-430173 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-430173 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-430173 delete -f testdata/storage-provisioner/pod.yaml: (1.837062959s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-430173 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ab64de5a-2f54-4de5-b2a9-e717214b7f9a] Pending
helpers_test.go:344: "sp-pod" [ab64de5a-2f54-4de5-b2a9-e717214b7f9a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ab64de5a-2f54-4de5-b2a9-e717214b7f9a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004123538s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-430173 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.97s)
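The PersistentVolumeClaim run above writes /tmp/mount/foo through the first sp-pod, deletes and recreates the pod, and then lists /tmp/mount to show the file survived because it lives on the claim rather than the pod filesystem. A condensed, hypothetical Go sketch of that persistence check follows; the kubectl context, pod name, and manifest paths come from the log, and the manifests are assumed to be reachable from the working directory:

    // Sketch: verify that data written to a PVC-backed mount survives pod recreation.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubectl runs a kubectl command against the functional-430173 context.
    func kubectl(args ...string) ([]byte, error) {
        full := append([]string{"--context", "functional-430173"}, args...)
        return exec.Command("kubectl", full...).CombinedOutput()
    }

    func main() {
        if _, err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
            fmt.Println("write failed:", err)
            return
        }
        _, _ = kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
        _, _ = kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        // Once the new sp-pod is Running, the file written by the old pod should still be visible.
        out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
        fmt.Printf("ls /tmp/mount -> %s (err=%v)\n", out, err)
    }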

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh -n functional-430173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 cp functional-430173:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1615595727/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh -n functional-430173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh -n functional-430173 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.36s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (26.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-430173 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-7p6lv" [3583b0cc-5353-405b-851c-6bd6010ce37c] Pending
helpers_test.go:344: "mysql-58ccfd96bb-7p6lv" [3583b0cc-5353-405b-851c-6bd6010ce37c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-7p6lv" [3583b0cc-5353-405b-851c-6bd6010ce37c] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.003370455s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-430173 exec mysql-58ccfd96bb-7p6lv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-430173 exec mysql-58ccfd96bb-7p6lv -- mysql -ppassword -e "show databases;": exit status 1 (141.658299ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 10:43:51.137506  356204 retry.go:31] will retry after 576.455529ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-430173 exec mysql-58ccfd96bb-7p6lv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-430173 exec mysql-58ccfd96bb-7p6lv -- mysql -ppassword -e "show databases;": exit status 1 (207.180656ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 10:43:51.921735  356204 retry.go:31] will retry after 890.936071ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-430173 exec mysql-58ccfd96bb-7p6lv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-430173 exec mysql-58ccfd96bb-7p6lv -- mysql -ppassword -e "show databases;": exit status 1 (208.987571ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 10:43:53.022420  356204 retry.go:31] will retry after 2.996320602s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-430173 exec mysql-58ccfd96bb-7p6lv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-430173 exec mysql-58ccfd96bb-7p6lv -- mysql -ppassword -e "show databases;": exit status 1 (210.321985ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 10:43:56.229590  356204 retry.go:31] will retry after 4.16562386s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-430173 exec mysql-58ccfd96bb-7p6lv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.02s)
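Note on the retries above: the ERROR 2002 and ERROR 1045 failures are expected while mysqld is still coming up inside the pod, so the test re-runs the kubectl exec with a growing backoff until the query succeeds. A minimal Go sketch of that retry pattern, standalone and hypothetical rather than the test's actual helper (the command and pod name are copied from this run; the backoff values are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runShowDatabases shells out the same way the test does and returns combined
// output, so transient MySQL startup errors surface here as errors.
func runShowDatabases(kubeContext, pod string) (string, error) {
	cmd := exec.Command("kubectl", "--context", kubeContext, "exec", pod, "--",
		"mysql", "-ppassword", "-e", "show databases;")
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	backoff := 500 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)
	for {
		out, err := runShowDatabases("functional-430173", "mysql-58ccfd96bb-7p6lv")
		if err == nil {
			fmt.Print(out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("giving up:", err)
			return
		}
		// ERROR 2002 / ERROR 1045 while mysqld warms up: wait, then try again.
		time.Sleep(backoff)
		backoff *= 2
	}
}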

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/356204/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "sudo cat /etc/test/nested/copy/356204/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/356204.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "sudo cat /etc/ssl/certs/356204.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/356204.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "sudo cat /usr/share/ca-certificates/356204.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3562042.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "sudo cat /etc/ssl/certs/3562042.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3562042.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "sudo cat /usr/share/ca-certificates/3562042.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.19s)
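Both FileSync and CertSync reduce to running minikube ssh "sudo cat <path>" inside the VM and asserting on the output. A small hedged Go sketch of the same check (binary path, profile and file paths are taken from this run; the catInVM helper is made up):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// catInVM reads a file inside the minikube VM over ssh, mirroring what the
// FileSync/CertSync assertions do with "minikube ssh".
func catInVM(profile, path string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", fmt.Sprintf("sudo cat %s", path))
	out, err := cmd.Output()
	return string(out), err
}

func main() {
	paths := []string{
		"/etc/test/nested/copy/356204/hosts",
		"/etc/ssl/certs/356204.pem",
		"/usr/share/ca-certificates/356204.pem",
	}
	for _, p := range paths {
		got, err := catInVM("functional-430173", p)
		if err != nil {
			fmt.Println("missing:", p, err)
			continue
		}
		fmt.Printf("%s: %d bytes, first line %q\n", p, len(got), strings.SplitN(got, "\n", 2)[0])
	}
}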

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-430173 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
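The go-template above prints the label keys of the first node. For comparison, a hedged sketch of the same check done directly against the API with client-go (assumes KUBECONFIG points at this cluster and that k8s.io/client-go is available in go.mod; error handling is minimal):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Use whatever kubeconfig the environment points at (the test jobs set KUBECONFIG).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil || len(nodes.Items) == 0 {
		panic(fmt.Sprintf("could not list nodes: %v", err))
	}
	// Equivalent of: {{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}
	for k := range nodes.Items[0].Labels {
		fmt.Print(k, " ")
	}
	fmt.Println()
}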

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430173 ssh "sudo systemctl is-active docker": exit status 1 (190.051453ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430173 ssh "sudo systemctl is-active crio": exit status 1 (199.740832ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
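The non-zero exits here are the expected result: systemctl is-active exits with status 3 for an inactive unit, the ssh wrapper surfaces that as a failure, and the test only checks that stdout says "inactive". A hypothetical Go sketch of that exit-code handling when run directly on the node (the unit list is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// isActive runs "systemctl is-active <unit>" and returns its stdout even when
// the command exits non-zero, because "inactive" is reported via exit code 3.
func isActive(unit string) (string, error) {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		return "", err // could not run systemctl at all
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for _, unit := range []string{"docker", "crio", "containerd"} {
		state, err := isActive(unit)
		if err != nil {
			fmt.Println(unit, "error:", err)
			continue
		}
		// On a containerd node, docker and crio should both report "inactive".
		fmt.Printf("%s: %s\n", unit, state)
	}
}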

                                                
                                    
x
+
TestFunctional/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-430173 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-430173 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-9j8b8" [f65c645f-ed7b-46c8-b2b6-1b2215dc72b7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-9j8b8" [f65c645f-ed7b-46c8-b2b6-1b2215dc72b7] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003009276s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)
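DeployApp is a straight create/expose/wait sequence: create the deployment, expose it as a NodePort service, then poll until a pod carrying the app=hello-node label is Ready. A hedged Go sketch of the same sequence driven through kubectl (image, names and the 10m timeout mirror the run above; the kubectl helper is made up):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the functional-430173 context and
// returns combined output, which is enough for a linear create/expose/wait flow.
func kubectl(args ...string) (string, error) {
	full := append([]string{"--context", "functional-430173"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	steps := [][]string{
		{"create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8"},
		{"expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"},
		// Block until the pod behind the app=hello-node label is Ready, like the test's 10m wait.
		{"wait", "--for=condition=Ready", "pod", "-l", "app=hello-node", "--timeout=10m"},
	}
	for _, s := range steps {
		out, err := kubectl(s...)
		if err != nil {
			fmt.Printf("step %v failed: %v\n%s", s, err, out)
			return
		}
		fmt.Print(out)
	}
}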

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-430173 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-430173
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-430173
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-430173 image ls --format short --alsologtostderr:
I0127 10:43:53.369901  364860 out.go:345] Setting OutFile to fd 1 ...
I0127 10:43:53.370036  364860 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:53.370049  364860 out.go:358] Setting ErrFile to fd 2...
I0127 10:43:53.370056  364860 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:53.370271  364860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
I0127 10:43:53.370879  364860 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 10:43:53.370977  364860 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 10:43:53.371321  364860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 10:43:53.371391  364860 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:53.388031  364860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
I0127 10:43:53.388494  364860 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:53.389069  364860 main.go:141] libmachine: Using API Version  1
I0127 10:43:53.389100  364860 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:53.389479  364860 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:53.389757  364860 main.go:141] libmachine: (functional-430173) Calling .GetState
I0127 10:43:53.391588  364860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 10:43:53.391646  364860 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:53.407289  364860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
I0127 10:43:53.407765  364860 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:53.408293  364860 main.go:141] libmachine: Using API Version  1
I0127 10:43:53.408326  364860 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:53.408641  364860 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:53.408817  364860 main.go:141] libmachine: (functional-430173) Calling .DriverName
I0127 10:43:53.408993  364860 ssh_runner.go:195] Run: systemctl --version
I0127 10:43:53.409020  364860 main.go:141] libmachine: (functional-430173) Calling .GetSSHHostname
I0127 10:43:53.411602  364860 main.go:141] libmachine: (functional-430173) DBG | domain functional-430173 has defined MAC address 52:54:00:6d:b3:40 in network mk-functional-430173
I0127 10:43:53.411966  364860 main.go:141] libmachine: (functional-430173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:b3:40", ip: ""} in network mk-functional-430173: {Iface:virbr1 ExpiryTime:2025-01-27 11:40:32 +0000 UTC Type:0 Mac:52:54:00:6d:b3:40 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:functional-430173 Clientid:01:52:54:00:6d:b3:40}
I0127 10:43:53.412002  364860 main.go:141] libmachine: (functional-430173) DBG | domain functional-430173 has defined IP address 192.168.50.166 and MAC address 52:54:00:6d:b3:40 in network mk-functional-430173
I0127 10:43:53.412165  364860 main.go:141] libmachine: (functional-430173) Calling .GetSSHPort
I0127 10:43:53.412343  364860 main.go:141] libmachine: (functional-430173) Calling .GetSSHKeyPath
I0127 10:43:53.412482  364860 main.go:141] libmachine: (functional-430173) Calling .GetSSHUsername
I0127 10:43:53.412609  364860 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/functional-430173/id_rsa Username:docker}
I0127 10:43:53.493712  364860 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 10:43:53.544853  364860 main.go:141] libmachine: Making call to close driver server
I0127 10:43:53.544872  364860 main.go:141] libmachine: (functional-430173) Calling .Close
I0127 10:43:53.545175  364860 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:53.545202  364860 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:53.545212  364860 main.go:141] libmachine: Making call to close driver server
I0127 10:43:53.545220  364860 main.go:141] libmachine: (functional-430173) Calling .Close
I0127 10:43:53.545486  364860 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:53.545508  364860 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:53.545515  364860 main.go:141] libmachine: (functional-430173) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
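The stderr above shows where image ls gets its data: minikube opens an ssh session into the VM and runs sudo crictl images --output json, then formats the result. A rough Go sketch of doing the same by hand; note that the "images" wrapper key in the decoded struct is an assumption about crictl's JSON shape, not something this log confirms:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages is a deliberately loose view of "crictl images --output json";
// only the assumed top-level "images" array is decoded.
type crictlImages struct {
	Images []json.RawMessage `json:"images"`
}

func main() {
	// Same command chain the stderr shows: minikube ssh -> sudo crictl images --output json.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-430173",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		fmt.Println("unexpected crictl output:", err)
		return
	}
	fmt.Printf("%d images reported by the runtime\n", len(parsed.Images))
}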

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-430173 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:019ee1 | 26.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| localhost/my-image                          | functional-430173  | sha256:1402c8 | 775kB  |
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:2b0d65 | 20.7MB |
| docker.io/library/nginx                     | latest             | sha256:9bea9f | 72.1MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:50415e | 38.6MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:95c0bd | 28.7MB |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e29f9c | 30.9MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kicbase/echo-server               | functional-430173  | sha256:9056ab | 2.37MB |
| docker.io/library/minikube-local-cache-test | functional-430173  | sha256:ce8248 | 992B   |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-430173 image ls --format table --alsologtostderr:
I0127 10:43:56.954705  365019 out.go:345] Setting OutFile to fd 1 ...
I0127 10:43:56.954813  365019 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:56.954822  365019 out.go:358] Setting ErrFile to fd 2...
I0127 10:43:56.954826  365019 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:56.954980  365019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
I0127 10:43:56.955544  365019 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 10:43:56.955638  365019 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 10:43:56.955971  365019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 10:43:56.956025  365019 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:56.970871  365019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
I0127 10:43:56.971298  365019 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:56.971785  365019 main.go:141] libmachine: Using API Version  1
I0127 10:43:56.971813  365019 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:56.972149  365019 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:56.972330  365019 main.go:141] libmachine: (functional-430173) Calling .GetState
I0127 10:43:56.974096  365019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 10:43:56.974142  365019 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:56.988470  365019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
I0127 10:43:56.988869  365019 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:56.989309  365019 main.go:141] libmachine: Using API Version  1
I0127 10:43:56.989334  365019 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:56.989694  365019 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:56.989897  365019 main.go:141] libmachine: (functional-430173) Calling .DriverName
I0127 10:43:56.990095  365019 ssh_runner.go:195] Run: systemctl --version
I0127 10:43:56.990120  365019 main.go:141] libmachine: (functional-430173) Calling .GetSSHHostname
I0127 10:43:56.992489  365019 main.go:141] libmachine: (functional-430173) DBG | domain functional-430173 has defined MAC address 52:54:00:6d:b3:40 in network mk-functional-430173
I0127 10:43:56.992893  365019 main.go:141] libmachine: (functional-430173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:b3:40", ip: ""} in network mk-functional-430173: {Iface:virbr1 ExpiryTime:2025-01-27 11:40:32 +0000 UTC Type:0 Mac:52:54:00:6d:b3:40 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:functional-430173 Clientid:01:52:54:00:6d:b3:40}
I0127 10:43:56.992926  365019 main.go:141] libmachine: (functional-430173) DBG | domain functional-430173 has defined IP address 192.168.50.166 and MAC address 52:54:00:6d:b3:40 in network mk-functional-430173
I0127 10:43:56.993060  365019 main.go:141] libmachine: (functional-430173) Calling .GetSSHPort
I0127 10:43:56.993263  365019 main.go:141] libmachine: (functional-430173) Calling .GetSSHKeyPath
I0127 10:43:56.993414  365019 main.go:141] libmachine: (functional-430173) Calling .GetSSHUsername
I0127 10:43:56.993547  365019 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/functional-430173/id_rsa Username:docker}
I0127 10:43:57.076016  365019 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 10:43:57.120640  365019 main.go:141] libmachine: Making call to close driver server
I0127 10:43:57.120658  365019 main.go:141] libmachine: (functional-430173) Calling .Close
I0127 10:43:57.120943  365019 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:57.120997  365019 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:57.121001  365019 main.go:141] libmachine: (functional-430173) DBG | Closing plugin on server side
I0127 10:43:57.121016  365019 main.go:141] libmachine: Making call to close driver server
I0127 10:43:57.121031  365019 main.go:141] libmachine: (functional-430173) Calling .Close
I0127 10:43:57.121250  365019 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:57.121274  365019 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:57.121303  365019 main.go:141] libmachine: (functional-430173) DBG | Closing plugin on server side
2025/01/27 10:43:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-430173 image ls --format json --alsologtostderr:
[{"id":"sha256:ce8248c537ce9499ce0252d1cee66ad6a8e2aa59e72b6926eec7ae9afb79b877","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-430173"],"size":"992"},{"id":"sha256:1402c83a89eca5ecaae105df5b67f534e93cec5f7b88aac05357c269bf6400b1","repoDigests":[],"repoTags":["localhost/my-image:functional-430173"],"size":"774888"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"28671624"},{"id":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sh
a256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"30908485"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"r
epoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-430173"],"size":"2372971"},{"id":"sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"38601118"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikub
e/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"72080558"},{"id":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","
repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"26258470"},{"id":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"20657536"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-430173 image ls --format json --alsologtostderr:
I0127 10:43:56.746113  364995 out.go:345] Setting OutFile to fd 1 ...
I0127 10:43:56.746236  364995 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:56.746248  364995 out.go:358] Setting ErrFile to fd 2...
I0127 10:43:56.746255  364995 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:56.746457  364995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
I0127 10:43:56.747133  364995 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 10:43:56.747238  364995 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 10:43:56.747729  364995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 10:43:56.747805  364995 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:56.762657  364995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
I0127 10:43:56.763135  364995 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:56.763673  364995 main.go:141] libmachine: Using API Version  1
I0127 10:43:56.763696  364995 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:56.764019  364995 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:56.764205  364995 main.go:141] libmachine: (functional-430173) Calling .GetState
I0127 10:43:56.765866  364995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 10:43:56.765910  364995 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:56.779815  364995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
I0127 10:43:56.780281  364995 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:56.780740  364995 main.go:141] libmachine: Using API Version  1
I0127 10:43:56.780764  364995 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:56.781058  364995 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:56.781242  364995 main.go:141] libmachine: (functional-430173) Calling .DriverName
I0127 10:43:56.781421  364995 ssh_runner.go:195] Run: systemctl --version
I0127 10:43:56.781444  364995 main.go:141] libmachine: (functional-430173) Calling .GetSSHHostname
I0127 10:43:56.784039  364995 main.go:141] libmachine: (functional-430173) DBG | domain functional-430173 has defined MAC address 52:54:00:6d:b3:40 in network mk-functional-430173
I0127 10:43:56.784381  364995 main.go:141] libmachine: (functional-430173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:b3:40", ip: ""} in network mk-functional-430173: {Iface:virbr1 ExpiryTime:2025-01-27 11:40:32 +0000 UTC Type:0 Mac:52:54:00:6d:b3:40 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:functional-430173 Clientid:01:52:54:00:6d:b3:40}
I0127 10:43:56.784422  364995 main.go:141] libmachine: (functional-430173) DBG | domain functional-430173 has defined IP address 192.168.50.166 and MAC address 52:54:00:6d:b3:40 in network mk-functional-430173
I0127 10:43:56.784559  364995 main.go:141] libmachine: (functional-430173) Calling .GetSSHPort
I0127 10:43:56.784726  364995 main.go:141] libmachine: (functional-430173) Calling .GetSSHKeyPath
I0127 10:43:56.784890  364995 main.go:141] libmachine: (functional-430173) Calling .GetSSHUsername
I0127 10:43:56.785018  364995 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/functional-430173/id_rsa Username:docker}
I0127 10:43:56.864565  364995 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 10:43:56.900402  364995 main.go:141] libmachine: Making call to close driver server
I0127 10:43:56.900412  364995 main.go:141] libmachine: (functional-430173) Calling .Close
I0127 10:43:56.900679  364995 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:56.900707  364995 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:56.900715  364995 main.go:141] libmachine: (functional-430173) DBG | Closing plugin on server side
I0127 10:43:56.900721  364995 main.go:141] libmachine: Making call to close driver server
I0127 10:43:56.900746  364995 main.go:141] libmachine: (functional-430173) Calling .Close
I0127 10:43:56.900995  364995 main.go:141] libmachine: (functional-430173) DBG | Closing plugin on server side
I0127 10:43:56.900998  364995 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:56.901033  364995 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)
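The JSON emitted by image ls --format json is a flat array of objects with id, repoDigests, repoTags and size fields, as shown above. A minimal Go sketch that decodes it into a struct matching those fields (binary path and profile are the ones from this run):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageEntry mirrors the fields visible in the JSON output above; size is a
// string of bytes in that output, so it stays a string here.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-430173",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []imageEntry
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}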

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-430173 image ls --format yaml --alsologtostderr:
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "28671624"
- id: sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "38601118"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "72080558"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:ce8248c537ce9499ce0252d1cee66ad6a8e2aa59e72b6926eec7ae9afb79b877
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-430173
size: "992"
- id: sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "26258470"
- id: sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "30908485"
- id: sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "20657536"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-430173
size: "2372971"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-430173 image ls --format yaml --alsologtostderr:
I0127 10:43:53.599175  364884 out.go:345] Setting OutFile to fd 1 ...
I0127 10:43:53.599703  364884 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:53.599724  364884 out.go:358] Setting ErrFile to fd 2...
I0127 10:43:53.599730  364884 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:53.600156  364884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
I0127 10:43:53.601147  364884 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 10:43:53.601266  364884 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 10:43:53.601675  364884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 10:43:53.601718  364884 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:53.618302  364884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
I0127 10:43:53.618808  364884 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:53.619423  364884 main.go:141] libmachine: Using API Version  1
I0127 10:43:53.619448  364884 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:53.619807  364884 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:53.620020  364884 main.go:141] libmachine: (functional-430173) Calling .GetState
I0127 10:43:53.622008  364884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 10:43:53.622062  364884 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:53.636777  364884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
I0127 10:43:53.637261  364884 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:53.637729  364884 main.go:141] libmachine: Using API Version  1
I0127 10:43:53.637750  364884 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:53.638038  364884 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:53.638249  364884 main.go:141] libmachine: (functional-430173) Calling .DriverName
I0127 10:43:53.638433  364884 ssh_runner.go:195] Run: systemctl --version
I0127 10:43:53.638456  364884 main.go:141] libmachine: (functional-430173) Calling .GetSSHHostname
I0127 10:43:53.640792  364884 main.go:141] libmachine: (functional-430173) DBG | domain functional-430173 has defined MAC address 52:54:00:6d:b3:40 in network mk-functional-430173
I0127 10:43:53.641196  364884 main.go:141] libmachine: (functional-430173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:b3:40", ip: ""} in network mk-functional-430173: {Iface:virbr1 ExpiryTime:2025-01-27 11:40:32 +0000 UTC Type:0 Mac:52:54:00:6d:b3:40 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:functional-430173 Clientid:01:52:54:00:6d:b3:40}
I0127 10:43:53.641213  364884 main.go:141] libmachine: (functional-430173) DBG | domain functional-430173 has defined IP address 192.168.50.166 and MAC address 52:54:00:6d:b3:40 in network mk-functional-430173
I0127 10:43:53.641368  364884 main.go:141] libmachine: (functional-430173) Calling .GetSSHPort
I0127 10:43:53.641533  364884 main.go:141] libmachine: (functional-430173) Calling .GetSSHKeyPath
I0127 10:43:53.641690  364884 main.go:141] libmachine: (functional-430173) Calling .GetSSHUsername
I0127 10:43:53.641833  364884 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/functional-430173/id_rsa Username:docker}
I0127 10:43:53.719807  364884 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 10:43:53.763732  364884 main.go:141] libmachine: Making call to close driver server
I0127 10:43:53.763751  364884 main.go:141] libmachine: (functional-430173) Calling .Close
I0127 10:43:53.763978  364884 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:53.764004  364884 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:53.764026  364884 main.go:141] libmachine: Making call to close driver server
I0127 10:43:53.764032  364884 main.go:141] libmachine: (functional-430173) DBG | Closing plugin on server side
I0127 10:43:53.764038  364884 main.go:141] libmachine: (functional-430173) Calling .Close
I0127 10:43:53.764358  364884 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:53.764373  364884 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430173 ssh pgrep buildkitd: exit status 1 (194.082168ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image build -t localhost/my-image:functional-430173 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-430173 image build -t localhost/my-image:functional-430173 testdata/build --alsologtostderr: (2.530416857s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-430173 image build -t localhost/my-image:functional-430173 testdata/build --alsologtostderr:
I0127 10:43:54.011669  364938 out.go:345] Setting OutFile to fd 1 ...
I0127 10:43:54.011809  364938 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:54.011820  364938 out.go:358] Setting ErrFile to fd 2...
I0127 10:43:54.011826  364938 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:54.011991  364938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
I0127 10:43:54.012579  364938 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 10:43:54.013175  364938 config.go:182] Loaded profile config "functional-430173": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 10:43:54.013539  364938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 10:43:54.013609  364938 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:54.029573  364938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
I0127 10:43:54.029991  364938 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:54.030587  364938 main.go:141] libmachine: Using API Version  1
I0127 10:43:54.030618  364938 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:54.031040  364938 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:54.031237  364938 main.go:141] libmachine: (functional-430173) Calling .GetState
I0127 10:43:54.032986  364938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 10:43:54.033021  364938 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:54.047884  364938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46643
I0127 10:43:54.048312  364938 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:54.048754  364938 main.go:141] libmachine: Using API Version  1
I0127 10:43:54.048776  364938 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:54.049078  364938 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:54.049235  364938 main.go:141] libmachine: (functional-430173) Calling .DriverName
I0127 10:43:54.049407  364938 ssh_runner.go:195] Run: systemctl --version
I0127 10:43:54.049431  364938 main.go:141] libmachine: (functional-430173) Calling .GetSSHHostname
I0127 10:43:54.052019  364938 main.go:141] libmachine: (functional-430173) DBG | domain functional-430173 has defined MAC address 52:54:00:6d:b3:40 in network mk-functional-430173
I0127 10:43:54.052405  364938 main.go:141] libmachine: (functional-430173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:b3:40", ip: ""} in network mk-functional-430173: {Iface:virbr1 ExpiryTime:2025-01-27 11:40:32 +0000 UTC Type:0 Mac:52:54:00:6d:b3:40 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:functional-430173 Clientid:01:52:54:00:6d:b3:40}
I0127 10:43:54.052432  364938 main.go:141] libmachine: (functional-430173) DBG | domain functional-430173 has defined IP address 192.168.50.166 and MAC address 52:54:00:6d:b3:40 in network mk-functional-430173
I0127 10:43:54.052560  364938 main.go:141] libmachine: (functional-430173) Calling .GetSSHPort
I0127 10:43:54.052717  364938 main.go:141] libmachine: (functional-430173) Calling .GetSSHKeyPath
I0127 10:43:54.052831  364938 main.go:141] libmachine: (functional-430173) Calling .GetSSHUsername
I0127 10:43:54.052962  364938 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/functional-430173/id_rsa Username:docker}
I0127 10:43:54.135259  364938 build_images.go:161] Building image from path: /tmp/build.427562117.tar
I0127 10:43:54.135308  364938 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 10:43:54.145995  364938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.427562117.tar
I0127 10:43:54.150234  364938 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.427562117.tar: stat -c "%s %y" /var/lib/minikube/build/build.427562117.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.427562117.tar': No such file or directory
I0127 10:43:54.150255  364938 ssh_runner.go:362] scp /tmp/build.427562117.tar --> /var/lib/minikube/build/build.427562117.tar (3072 bytes)
I0127 10:43:54.178609  364938 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.427562117
I0127 10:43:54.191108  364938 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.427562117 -xf /var/lib/minikube/build/build.427562117.tar
I0127 10:43:54.200600  364938 containerd.go:394] Building image: /var/lib/minikube/build/build.427562117
I0127 10:43:54.200655  364938 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.427562117 --local dockerfile=/var/lib/minikube/build/build.427562117 --output type=image,name=localhost/my-image:functional-430173
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d0beee6fd8350de90bcbd8b260c9488a496d6ce405743d24c032dc7d024c83d2
#8 exporting manifest sha256:d0beee6fd8350de90bcbd8b260c9488a496d6ce405743d24c032dc7d024c83d2 0.0s done
#8 exporting config sha256:1402c83a89eca5ecaae105df5b67f534e93cec5f7b88aac05357c269bf6400b1 0.0s done
#8 naming to localhost/my-image:functional-430173 done
#8 DONE 0.2s
I0127 10:43:56.455856  364938 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.427562117 --local dockerfile=/var/lib/minikube/build/build.427562117 --output type=image,name=localhost/my-image:functional-430173: (2.255165757s)
I0127 10:43:56.455955  364938 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.427562117
I0127 10:43:56.473746  364938 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.427562117.tar
I0127 10:43:56.488639  364938 build_images.go:217] Built localhost/my-image:functional-430173 from /tmp/build.427562117.tar
I0127 10:43:56.488674  364938 build_images.go:133] succeeded building to: functional-430173
I0127 10:43:56.488680  364938 build_images.go:134] failed building to: 
I0127 10:43:56.488713  364938 main.go:141] libmachine: Making call to close driver server
I0127 10:43:56.488755  364938 main.go:141] libmachine: (functional-430173) Calling .Close
I0127 10:43:56.489039  364938 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:56.489061  364938 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:56.489076  364938 main.go:141] libmachine: Making call to close driver server
I0127 10:43:56.489078  364938 main.go:141] libmachine: (functional-430173) DBG | Closing plugin on server side
I0127 10:43:56.489084  364938 main.go:141] libmachine: (functional-430173) Calling .Close
I0127 10:43:56.489598  364938 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:56.489617  364938 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:56.489695  364938 main.go:141] libmachine: (functional-430173) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)
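For reference, the on-guest build flow that ImageBuild exercises can be replayed by hand; this is a sketch condensed from the ssh_runner lines above (build.427562117 is this run's temporary context name, and the image steps FROM busybox / RUN true / ADD content.txt come from the buildkit output, not from the test data itself):
    # stage the build context that the test uploaded to the guest
    $ sudo mkdir -p /var/lib/minikube/build/build.427562117
    $ sudo tar -C /var/lib/minikube/build/build.427562117 -xf /var/lib/minikube/build/build.427562117.tar
    # build with buildkit's dockerfile frontend and tag the result locally
    $ sudo buildctl build --frontend dockerfile.v0 \
        --local context=/var/lib/minikube/build/build.427562117 \
        --local dockerfile=/var/lib/minikube/build/build.427562117 \
        --output type=image,name=localhost/my-image:functional-430173
    # clean up the staged context and tarball
    $ sudo rm -rf /var/lib/minikube/build/build.427562117
    $ sudo rm -f /var/lib/minikube/build/build.427562117.tar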

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-430173
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image load --daemon kicbase/echo-server:functional-430173 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-430173 image load --daemon kicbase/echo-server:functional-430173 --alsologtostderr: (1.039554014s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)
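Setup plus ImageLoadDaemon amount to the following load-into-cluster sequence, taken directly from the commands the tests ran (functional-430173 is this run's profile name):
    $ docker pull kicbase/echo-server:1.0
    $ docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-430173
    $ out/minikube-linux-amd64 -p functional-430173 image load --daemon kicbase/echo-server:functional-430173 --alsologtostderr
    $ out/minikube-linux-amd64 -p functional-430173 image ls    # verify the image is now visible inside the cluster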

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image load --daemon kicbase/echo-server:functional-430173 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
I0127 10:43:28.327115  356204 retry.go:31] will retry after 1.308468754s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:93f770be-6089-4cb0-8614-5052d0ac1f93 ResourceVersion:730 Generation:0 CreationTimestamp:2025-01-27 10:43:28 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-93f770be-6089-4cb0-8614-5052d0ac1f93 StorageClassName:0xc000a1ff70 VolumeMode:0xc000890020 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-430173
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image load --daemon kicbase/echo-server:functional-430173 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image save kicbase/echo-server:functional-430173 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image rm kicbase/echo-server:functional-430173 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-430173
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 image save --daemon kicbase/echo-server:functional-430173 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-430173
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
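Taken together, the Save/Remove/LoadFromFile/SaveDaemon subtests form a round trip; a condensed listing of the commands they ran:
    # save the cluster image to a tarball, remove it from the cluster, then load it back
    $ out/minikube-linux-amd64 -p functional-430173 image save kicbase/echo-server:functional-430173 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
    $ out/minikube-linux-amd64 -p functional-430173 image rm kicbase/echo-server:functional-430173 --alsologtostderr
    $ out/minikube-linux-amd64 -p functional-430173 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
    # save back into the local docker daemon and confirm it arrived
    $ docker rmi kicbase/echo-server:functional-430173
    $ out/minikube-linux-amd64 -p functional-430173 image save --daemon kicbase/echo-server:functional-430173 --alsologtostderr
    $ docker image inspect kicbase/echo-server:functional-430173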

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 service list -o json
functional_test.go:1494: Took "303.243833ms" to run "out/minikube-linux-amd64 -p functional-430173 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "307.588266ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "56.666891ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.50.166:31987
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "347.621568ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "72.8079ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
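The three ProfileCmd subtests cover the profile list output variants; the commands, with the timings reported above showing that the light variants return noticeably faster:
    $ out/minikube-linux-amd64 profile list                   # ~308ms in this run
    $ out/minikube-linux-amd64 profile list -l                # light listing, ~57ms
    $ out/minikube-linux-amd64 profile list -o json           # ~348ms
    $ out/minikube-linux-amd64 profile list -o json --light   # ~73ms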

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (15.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-430173 /tmp/TestFunctionalparallelMountCmdany-port1341578921/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737974613898912362" to /tmp/TestFunctionalparallelMountCmdany-port1341578921/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737974613898912362" to /tmp/TestFunctionalparallelMountCmdany-port1341578921/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737974613898912362" to /tmp/TestFunctionalparallelMountCmdany-port1341578921/001/test-1737974613898912362
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430173 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (252.411253ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 10:43:34.151705  356204 retry.go:31] will retry after 602.333767ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 10:43 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 10:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 10:43 test-1737974613898912362
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh cat /mount-9p/test-1737974613898912362
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-430173 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e036f932-31df-4e3f-96a6-3b6a77031c4c] Pending
helpers_test.go:344: "busybox-mount" [e036f932-31df-4e3f-96a6-3b6a77031c4c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e036f932-31df-4e3f-96a6-3b6a77031c4c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e036f932-31df-4e3f-96a6-3b6a77031c4c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.004309147s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-430173 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-430173 /tmp/TestFunctionalparallelMountCmdany-port1341578921/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.81s)
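The 9p mount workflow exercised by any-port, condensed from the log (host directory and profile name are the ones from this run; the mount command is backgrounded here where the test runs it as a daemon):
    # export the host directory into the guest at /mount-9p over 9p
    $ out/minikube-linux-amd64 mount -p functional-430173 /tmp/TestFunctionalparallelMountCmdany-port1341578921/001:/mount-9p --alsologtostderr -v=1 &
    # confirm the guest sees a 9p filesystem at the mount point, then inspect it
    $ out/minikube-linux-amd64 -p functional-430173 ssh "findmnt -T /mount-9p | grep 9p"
    $ out/minikube-linux-amd64 -p functional-430173 ssh -- ls -la /mount-9p
    # tear down
    $ out/minikube-linux-amd64 -p functional-430173 ssh "sudo umount -f /mount-9p"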

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.50.166:31987
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)
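The ServiceCmd subtests query the same hello-node NodePort service in several forms; the commands and the endpoints they returned in this run:
    $ out/minikube-linux-amd64 -p functional-430173 service list
    $ out/minikube-linux-amd64 -p functional-430173 service list -o json
    $ out/minikube-linux-amd64 -p functional-430173 service --namespace=default --https --url hello-node    # -> https://192.168.50.166:31987
    $ out/minikube-linux-amd64 -p functional-430173 service hello-node --url --format={{.IP}}
    $ out/minikube-linux-amd64 -p functional-430173 service hello-node --url                                # -> http://192.168.50.166:31987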

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-430173 /tmp/TestFunctionalparallelMountCmdspecific-port2502159674/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-430173 /tmp/TestFunctionalparallelMountCmdspecific-port2502159674/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430173 ssh "sudo umount -f /mount-9p": exit status 1 (238.236599ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-430173 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-430173 /tmp/TestFunctionalparallelMountCmdspecific-port2502159674/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.09s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-430173 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3400373692/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-430173 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3400373692/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-430173 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3400373692/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430173 ssh "findmnt -T" /mount1: exit status 1 (277.116127ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 10:43:51.085065  356204 retry.go:31] will retry after 527.122458ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-430173 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-430173 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-430173 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3400373692/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-430173 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3400373692/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-430173 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3400373692/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)
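VerifyCleanup starts three parallel mounts and then relies on the kill switch rather than stopping each one individually; a sketch condensed from the log, with the shared host directory shortened to /tmp/.../001 for readability:
    $ out/minikube-linux-amd64 mount -p functional-430173 /tmp/.../001:/mount1 --alsologtostderr -v=1 &
    $ out/minikube-linux-amd64 mount -p functional-430173 /tmp/.../001:/mount2 --alsologtostderr -v=1 &
    $ out/minikube-linux-amd64 mount -p functional-430173 /tmp/.../001:/mount3 --alsologtostderr -v=1 &
    $ out/minikube-linux-amd64 -p functional-430173 ssh "findmnt -T" /mount1    # repeated for /mount2 and /mount3
    # terminate every mount process for the profile in one shot
    $ out/minikube-linux-amd64 mount -p functional-430173 --kill=true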

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-430173
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-430173
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-430173
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (195.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-548195 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 10:45:44.115202  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:46:11.815437  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-548195 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m14.849039259s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.51s)
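The HA cluster under test is created with a single start invocation; from the log (--ha provisions multiple control-plane nodes, and the status output later in this report shows three control planes plus a worker added afterwards):
    $ out/minikube-linux-amd64 start -p ha-548195 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=containerd
    # confirm every node reports Running/Configured
    $ out/minikube-linux-amd64 -p ha-548195 status -v=7 --alsologtostderr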

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-548195 -- rollout status deployment/busybox: (4.162542477s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-bbmpv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-c76c7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-p62hz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-bbmpv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-c76c7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-p62hz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-bbmpv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-c76c7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-p62hz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.24s)
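DeployApp rolls out a busybox deployment and checks DNS resolution from every pod; the core sequence from the log, with <pod> standing in for each busybox pod name listed by the get pods step:
    $ out/minikube-linux-amd64 kubectl -p ha-548195 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    $ out/minikube-linux-amd64 kubectl -p ha-548195 -- rollout status deployment/busybox
    $ out/minikube-linux-amd64 kubectl -p ha-548195 -- get pods -o jsonpath='{.items[*].metadata.name}'
    # per pod: resolve a public name, the cluster service short name, and the fully qualified cluster name
    $ out/minikube-linux-amd64 kubectl -p ha-548195 -- exec <pod> -- nslookup kubernetes.io
    $ out/minikube-linux-amd64 kubectl -p ha-548195 -- exec <pod> -- nslookup kubernetes.default
    $ out/minikube-linux-amd64 kubectl -p ha-548195 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local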

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-bbmpv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-bbmpv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-c76c7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-c76c7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-p62hz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-548195 -- exec busybox-58667487b6-p62hz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
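PingHostFromPods resolves host.minikube.internal from inside each pod and pings the resulting host address; per pod, as run above (192.168.39.1 is the host-side gateway for this run's network):
    $ out/minikube-linux-amd64 kubectl -p ha-548195 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    $ out/minikube-linux-amd64 kubectl -p ha-548195 -- exec <pod> -- sh -c "ping -c 1 192.168.39.1"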

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (53.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-548195 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-548195 -v=7 --alsologtostderr: (52.251093648s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.09s)
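The worker node is attached with a single node add followed by a status check; from the log:
    $ out/minikube-linux-amd64 node add -p ha-548195 -v=7 --alsologtostderr
    $ out/minikube-linux-amd64 -p ha-548195 status -v=7 --alsologtostderr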

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-548195 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp testdata/cp-test.txt ha-548195:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2743082545/001/cp-test_ha-548195.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195:/home/docker/cp-test.txt ha-548195-m02:/home/docker/cp-test_ha-548195_ha-548195-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m02 "sudo cat /home/docker/cp-test_ha-548195_ha-548195-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195:/home/docker/cp-test.txt ha-548195-m03:/home/docker/cp-test_ha-548195_ha-548195-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m03 "sudo cat /home/docker/cp-test_ha-548195_ha-548195-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195:/home/docker/cp-test.txt ha-548195-m04:/home/docker/cp-test_ha-548195_ha-548195-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m04 "sudo cat /home/docker/cp-test_ha-548195_ha-548195-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp testdata/cp-test.txt ha-548195-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m02 "sudo cat /home/docker/cp-test.txt"
E0127 10:48:22.836430  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:48:22.842860  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:48:22.854212  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:48:22.875543  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2743082545/001/cp-test_ha-548195-m02.txt
E0127 10:48:22.917348  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:48:22.998790  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m02 "sudo cat /home/docker/cp-test.txt"
E0127 10:48:23.160671  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195-m02:/home/docker/cp-test.txt ha-548195:/home/docker/cp-test_ha-548195-m02_ha-548195.txt
E0127 10:48:23.482615  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195 "sudo cat /home/docker/cp-test_ha-548195-m02_ha-548195.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195-m02:/home/docker/cp-test.txt ha-548195-m03:/home/docker/cp-test_ha-548195-m02_ha-548195-m03.txt
E0127 10:48:24.123997  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m03 "sudo cat /home/docker/cp-test_ha-548195-m02_ha-548195-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195-m02:/home/docker/cp-test.txt ha-548195-m04:/home/docker/cp-test_ha-548195-m02_ha-548195-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m04 "sudo cat /home/docker/cp-test_ha-548195-m02_ha-548195-m04.txt"
E0127 10:48:25.405551  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp testdata/cp-test.txt ha-548195-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2743082545/001/cp-test_ha-548195-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195-m03:/home/docker/cp-test.txt ha-548195:/home/docker/cp-test_ha-548195-m03_ha-548195.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195 "sudo cat /home/docker/cp-test_ha-548195-m03_ha-548195.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195-m03:/home/docker/cp-test.txt ha-548195-m02:/home/docker/cp-test_ha-548195-m03_ha-548195-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m02 "sudo cat /home/docker/cp-test_ha-548195-m03_ha-548195-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195-m03:/home/docker/cp-test.txt ha-548195-m04:/home/docker/cp-test_ha-548195-m03_ha-548195-m04.txt
E0127 10:48:27.967576  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m04 "sudo cat /home/docker/cp-test_ha-548195-m03_ha-548195-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp testdata/cp-test.txt ha-548195-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2743082545/001/cp-test_ha-548195-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195-m04:/home/docker/cp-test.txt ha-548195:/home/docker/cp-test_ha-548195-m04_ha-548195.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195 "sudo cat /home/docker/cp-test_ha-548195-m04_ha-548195.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195-m04:/home/docker/cp-test.txt ha-548195-m02:/home/docker/cp-test_ha-548195-m04_ha-548195-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m02 "sudo cat /home/docker/cp-test_ha-548195-m04_ha-548195-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 cp ha-548195-m04:/home/docker/cp-test.txt ha-548195-m03:/home/docker/cp-test_ha-548195-m04_ha-548195-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m03 "sudo cat /home/docker/cp-test_ha-548195-m04_ha-548195-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.86s)
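CopyFile exercises every direction of minikube cp across the four nodes and verifies each copy over ssh; the pattern for one pair, condensed from the log (the local destination path is shortened to /tmp/.../ for readability):
    # host -> node
    $ out/minikube-linux-amd64 -p ha-548195 cp testdata/cp-test.txt ha-548195:/home/docker/cp-test.txt
    # node -> host
    $ out/minikube-linux-amd64 -p ha-548195 cp ha-548195:/home/docker/cp-test.txt /tmp/.../cp-test_ha-548195.txt
    # node -> node
    $ out/minikube-linux-amd64 -p ha-548195 cp ha-548195:/home/docker/cp-test.txt ha-548195-m02:/home/docker/cp-test_ha-548195_ha-548195-m02.txt
    # verify the copy landed on the target node
    $ out/minikube-linux-amd64 -p ha-548195 ssh -n ha-548195-m02 "sudo cat /home/docker/cp-test_ha-548195_ha-548195-m02.txt"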

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (91.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 node stop m02 -v=7 --alsologtostderr
E0127 10:48:33.089084  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:48:43.330626  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:49:03.812958  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:49:44.774629  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-548195 node stop m02 -v=7 --alsologtostderr: (1m30.62820933s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-548195 status -v=7 --alsologtostderr: exit status 7 (658.244334ms)

                                                
                                                
-- stdout --
	ha-548195
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-548195-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-548195-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-548195-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 10:50:02.198606  369493 out.go:345] Setting OutFile to fd 1 ...
	I0127 10:50:02.198747  369493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:50:02.198760  369493 out.go:358] Setting ErrFile to fd 2...
	I0127 10:50:02.198767  369493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:50:02.199012  369493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
	I0127 10:50:02.199276  369493 out.go:352] Setting JSON to false
	I0127 10:50:02.199324  369493 mustload.go:65] Loading cluster: ha-548195
	I0127 10:50:02.199440  369493 notify.go:220] Checking for updates...
	I0127 10:50:02.199936  369493 config.go:182] Loaded profile config "ha-548195": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 10:50:02.199963  369493 status.go:174] checking status of ha-548195 ...
	I0127 10:50:02.200361  369493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:50:02.200404  369493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:50:02.220192  369493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0127 10:50:02.220620  369493 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:50:02.221237  369493 main.go:141] libmachine: Using API Version  1
	I0127 10:50:02.221266  369493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:50:02.221678  369493 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:50:02.221907  369493 main.go:141] libmachine: (ha-548195) Calling .GetState
	I0127 10:50:02.223578  369493 status.go:371] ha-548195 host status = "Running" (err=<nil>)
	I0127 10:50:02.223599  369493 host.go:66] Checking if "ha-548195" exists ...
	I0127 10:50:02.223958  369493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:50:02.224006  369493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:50:02.238424  369493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36337
	I0127 10:50:02.238767  369493 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:50:02.239197  369493 main.go:141] libmachine: Using API Version  1
	I0127 10:50:02.239216  369493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:50:02.239560  369493 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:50:02.239748  369493 main.go:141] libmachine: (ha-548195) Calling .GetIP
	I0127 10:50:02.242454  369493 main.go:141] libmachine: (ha-548195) DBG | domain ha-548195 has defined MAC address 52:54:00:fb:5e:83 in network mk-ha-548195
	I0127 10:50:02.242890  369493 main.go:141] libmachine: (ha-548195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:5e:83", ip: ""} in network mk-ha-548195: {Iface:virbr1 ExpiryTime:2025-01-27 11:44:16 +0000 UTC Type:0 Mac:52:54:00:fb:5e:83 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-548195 Clientid:01:52:54:00:fb:5e:83}
	I0127 10:50:02.242913  369493 main.go:141] libmachine: (ha-548195) DBG | domain ha-548195 has defined IP address 192.168.39.196 and MAC address 52:54:00:fb:5e:83 in network mk-ha-548195
	I0127 10:50:02.243048  369493 host.go:66] Checking if "ha-548195" exists ...
	I0127 10:50:02.243308  369493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:50:02.243350  369493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:50:02.257362  369493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0127 10:50:02.257829  369493 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:50:02.258242  369493 main.go:141] libmachine: Using API Version  1
	I0127 10:50:02.258265  369493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:50:02.258529  369493 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:50:02.258664  369493 main.go:141] libmachine: (ha-548195) Calling .DriverName
	I0127 10:50:02.258870  369493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 10:50:02.258911  369493 main.go:141] libmachine: (ha-548195) Calling .GetSSHHostname
	I0127 10:50:02.261364  369493 main.go:141] libmachine: (ha-548195) DBG | domain ha-548195 has defined MAC address 52:54:00:fb:5e:83 in network mk-ha-548195
	I0127 10:50:02.261894  369493 main.go:141] libmachine: (ha-548195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:5e:83", ip: ""} in network mk-ha-548195: {Iface:virbr1 ExpiryTime:2025-01-27 11:44:16 +0000 UTC Type:0 Mac:52:54:00:fb:5e:83 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-548195 Clientid:01:52:54:00:fb:5e:83}
	I0127 10:50:02.261929  369493 main.go:141] libmachine: (ha-548195) DBG | domain ha-548195 has defined IP address 192.168.39.196 and MAC address 52:54:00:fb:5e:83 in network mk-ha-548195
	I0127 10:50:02.262064  369493 main.go:141] libmachine: (ha-548195) Calling .GetSSHPort
	I0127 10:50:02.262223  369493 main.go:141] libmachine: (ha-548195) Calling .GetSSHKeyPath
	I0127 10:50:02.262343  369493 main.go:141] libmachine: (ha-548195) Calling .GetSSHUsername
	I0127 10:50:02.262467  369493 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/ha-548195/id_rsa Username:docker}
	I0127 10:50:02.351905  369493 ssh_runner.go:195] Run: systemctl --version
	I0127 10:50:02.359379  369493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 10:50:02.376420  369493 kubeconfig.go:125] found "ha-548195" server: "https://192.168.39.254:8443"
	I0127 10:50:02.376454  369493 api_server.go:166] Checking apiserver status ...
	I0127 10:50:02.376491  369493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 10:50:02.391884  369493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1106/cgroup
	W0127 10:50:02.402855  369493 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1106/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 10:50:02.402898  369493 ssh_runner.go:195] Run: ls
	I0127 10:50:02.407629  369493 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 10:50:02.414060  369493 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 10:50:02.414081  369493 status.go:463] ha-548195 apiserver status = Running (err=<nil>)
	I0127 10:50:02.414094  369493 status.go:176] ha-548195 status: &{Name:ha-548195 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 10:50:02.414130  369493 status.go:174] checking status of ha-548195-m02 ...
	I0127 10:50:02.414417  369493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:50:02.414462  369493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:50:02.431005  369493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36627
	I0127 10:50:02.431526  369493 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:50:02.432005  369493 main.go:141] libmachine: Using API Version  1
	I0127 10:50:02.432026  369493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:50:02.432399  369493 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:50:02.432659  369493 main.go:141] libmachine: (ha-548195-m02) Calling .GetState
	I0127 10:50:02.434342  369493 status.go:371] ha-548195-m02 host status = "Stopped" (err=<nil>)
	I0127 10:50:02.434354  369493 status.go:384] host is not running, skipping remaining checks
	I0127 10:50:02.434359  369493 status.go:176] ha-548195-m02 status: &{Name:ha-548195-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 10:50:02.434376  369493 status.go:174] checking status of ha-548195-m03 ...
	I0127 10:50:02.434649  369493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:50:02.434689  369493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:50:02.449631  369493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0127 10:50:02.449979  369493 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:50:02.450400  369493 main.go:141] libmachine: Using API Version  1
	I0127 10:50:02.450423  369493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:50:02.450718  369493 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:50:02.450943  369493 main.go:141] libmachine: (ha-548195-m03) Calling .GetState
	I0127 10:50:02.452458  369493 status.go:371] ha-548195-m03 host status = "Running" (err=<nil>)
	I0127 10:50:02.452475  369493 host.go:66] Checking if "ha-548195-m03" exists ...
	I0127 10:50:02.452782  369493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:50:02.452837  369493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:50:02.468069  369493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0127 10:50:02.468538  369493 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:50:02.469018  369493 main.go:141] libmachine: Using API Version  1
	I0127 10:50:02.469043  369493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:50:02.469407  369493 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:50:02.469613  369493 main.go:141] libmachine: (ha-548195-m03) Calling .GetIP
	I0127 10:50:02.472269  369493 main.go:141] libmachine: (ha-548195-m03) DBG | domain ha-548195-m03 has defined MAC address 52:54:00:1f:e3:9a in network mk-ha-548195
	I0127 10:50:02.472744  369493 main.go:141] libmachine: (ha-548195-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:9a", ip: ""} in network mk-ha-548195: {Iface:virbr1 ExpiryTime:2025-01-27 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1f:e3:9a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-548195-m03 Clientid:01:52:54:00:1f:e3:9a}
	I0127 10:50:02.472763  369493 main.go:141] libmachine: (ha-548195-m03) DBG | domain ha-548195-m03 has defined IP address 192.168.39.142 and MAC address 52:54:00:1f:e3:9a in network mk-ha-548195
	I0127 10:50:02.472924  369493 host.go:66] Checking if "ha-548195-m03" exists ...
	I0127 10:50:02.473334  369493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:50:02.473393  369493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:50:02.487864  369493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0127 10:50:02.488257  369493 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:50:02.488749  369493 main.go:141] libmachine: Using API Version  1
	I0127 10:50:02.488784  369493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:50:02.489088  369493 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:50:02.489304  369493 main.go:141] libmachine: (ha-548195-m03) Calling .DriverName
	I0127 10:50:02.489486  369493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 10:50:02.489507  369493 main.go:141] libmachine: (ha-548195-m03) Calling .GetSSHHostname
	I0127 10:50:02.492140  369493 main.go:141] libmachine: (ha-548195-m03) DBG | domain ha-548195-m03 has defined MAC address 52:54:00:1f:e3:9a in network mk-ha-548195
	I0127 10:50:02.492526  369493 main.go:141] libmachine: (ha-548195-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:9a", ip: ""} in network mk-ha-548195: {Iface:virbr1 ExpiryTime:2025-01-27 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1f:e3:9a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-548195-m03 Clientid:01:52:54:00:1f:e3:9a}
	I0127 10:50:02.492552  369493 main.go:141] libmachine: (ha-548195-m03) DBG | domain ha-548195-m03 has defined IP address 192.168.39.142 and MAC address 52:54:00:1f:e3:9a in network mk-ha-548195
	I0127 10:50:02.492670  369493 main.go:141] libmachine: (ha-548195-m03) Calling .GetSSHPort
	I0127 10:50:02.492855  369493 main.go:141] libmachine: (ha-548195-m03) Calling .GetSSHKeyPath
	I0127 10:50:02.493025  369493 main.go:141] libmachine: (ha-548195-m03) Calling .GetSSHUsername
	I0127 10:50:02.493149  369493 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/ha-548195-m03/id_rsa Username:docker}
	I0127 10:50:02.584705  369493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 10:50:02.605498  369493 kubeconfig.go:125] found "ha-548195" server: "https://192.168.39.254:8443"
	I0127 10:50:02.605529  369493 api_server.go:166] Checking apiserver status ...
	I0127 10:50:02.605567  369493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 10:50:02.621103  369493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0127 10:50:02.632130  369493 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 10:50:02.632181  369493 ssh_runner.go:195] Run: ls
	I0127 10:50:02.636846  369493 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 10:50:02.641867  369493 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 10:50:02.641892  369493 status.go:463] ha-548195-m03 apiserver status = Running (err=<nil>)
	I0127 10:50:02.641903  369493 status.go:176] ha-548195-m03 status: &{Name:ha-548195-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 10:50:02.641927  369493 status.go:174] checking status of ha-548195-m04 ...
	I0127 10:50:02.642226  369493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:50:02.642268  369493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:50:02.658355  369493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39889
	I0127 10:50:02.658730  369493 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:50:02.659196  369493 main.go:141] libmachine: Using API Version  1
	I0127 10:50:02.659215  369493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:50:02.659510  369493 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:50:02.659705  369493 main.go:141] libmachine: (ha-548195-m04) Calling .GetState
	I0127 10:50:02.661141  369493 status.go:371] ha-548195-m04 host status = "Running" (err=<nil>)
	I0127 10:50:02.661159  369493 host.go:66] Checking if "ha-548195-m04" exists ...
	I0127 10:50:02.661516  369493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:50:02.661553  369493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:50:02.675981  369493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0127 10:50:02.676345  369493 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:50:02.676936  369493 main.go:141] libmachine: Using API Version  1
	I0127 10:50:02.676961  369493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:50:02.677283  369493 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:50:02.677536  369493 main.go:141] libmachine: (ha-548195-m04) Calling .GetIP
	I0127 10:50:02.680502  369493 main.go:141] libmachine: (ha-548195-m04) DBG | domain ha-548195-m04 has defined MAC address 52:54:00:7c:3f:90 in network mk-ha-548195
	I0127 10:50:02.680883  369493 main.go:141] libmachine: (ha-548195-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:90", ip: ""} in network mk-ha-548195: {Iface:virbr1 ExpiryTime:2025-01-27 11:47:39 +0000 UTC Type:0 Mac:52:54:00:7c:3f:90 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-548195-m04 Clientid:01:52:54:00:7c:3f:90}
	I0127 10:50:02.680912  369493 main.go:141] libmachine: (ha-548195-m04) DBG | domain ha-548195-m04 has defined IP address 192.168.39.217 and MAC address 52:54:00:7c:3f:90 in network mk-ha-548195
	I0127 10:50:02.681099  369493 host.go:66] Checking if "ha-548195-m04" exists ...
	I0127 10:50:02.681514  369493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 10:50:02.681560  369493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:50:02.696306  369493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0127 10:50:02.696620  369493 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:50:02.697030  369493 main.go:141] libmachine: Using API Version  1
	I0127 10:50:02.697053  369493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:50:02.697382  369493 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:50:02.697616  369493 main.go:141] libmachine: (ha-548195-m04) Calling .DriverName
	I0127 10:50:02.697796  369493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 10:50:02.697812  369493 main.go:141] libmachine: (ha-548195-m04) Calling .GetSSHHostname
	I0127 10:50:02.700038  369493 main.go:141] libmachine: (ha-548195-m04) DBG | domain ha-548195-m04 has defined MAC address 52:54:00:7c:3f:90 in network mk-ha-548195
	I0127 10:50:02.700366  369493 main.go:141] libmachine: (ha-548195-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:90", ip: ""} in network mk-ha-548195: {Iface:virbr1 ExpiryTime:2025-01-27 11:47:39 +0000 UTC Type:0 Mac:52:54:00:7c:3f:90 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-548195-m04 Clientid:01:52:54:00:7c:3f:90}
	I0127 10:50:02.700392  369493 main.go:141] libmachine: (ha-548195-m04) DBG | domain ha-548195-m04 has defined IP address 192.168.39.217 and MAC address 52:54:00:7c:3f:90 in network mk-ha-548195
	I0127 10:50:02.700522  369493 main.go:141] libmachine: (ha-548195-m04) Calling .GetSSHPort
	I0127 10:50:02.700695  369493 main.go:141] libmachine: (ha-548195-m04) Calling .GetSSHKeyPath
	I0127 10:50:02.700833  369493 main.go:141] libmachine: (ha-548195-m04) Calling .GetSSHUsername
	I0127 10:50:02.700954  369493 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/ha-548195-m04/id_rsa Username:docker}
	I0127 10:50:02.786605  369493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 10:50:02.802647  369493 status.go:176] ha-548195-m04 status: &{Name:ha-548195-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.29s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

TestMultiControlPlane/serial/RestartSecondaryNode (38.47s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-548195 node start m02 -v=7 --alsologtostderr: (37.548973314s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.47s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (441.3s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-548195 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-548195 -v=7 --alsologtostderr
E0127 10:50:44.114837  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:51:06.696681  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:53:22.836848  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:53:50.538906  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-548195 -v=7 --alsologtostderr: (4m33.643790243s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-548195 --wait=true -v=7 --alsologtostderr
E0127 10:55:44.114685  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:57:07.178002  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-548195 --wait=true -v=7 --alsologtostderr: (2m47.547518758s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-548195
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (441.30s)

TestMultiControlPlane/serial/DeleteSecondaryNode (6.6s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-548195 node delete m03 -v=7 --alsologtostderr: (5.883808875s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.60s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

TestMultiControlPlane/serial/StopCluster (272.09s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 stop -v=7 --alsologtostderr
E0127 10:58:22.837242  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:00:44.114916  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-548195 stop -v=7 --alsologtostderr: (4m31.989785029s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-548195 status -v=7 --alsologtostderr: exit status 7 (101.404663ms)

-- stdout --
	ha-548195
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-548195-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-548195-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 11:02:43.298363  373254 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:02:43.298619  373254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:02:43.298628  373254 out.go:358] Setting ErrFile to fd 2...
	I0127 11:02:43.298633  373254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:02:43.298812  373254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
	I0127 11:02:43.299004  373254 out.go:352] Setting JSON to false
	I0127 11:02:43.299040  373254 mustload.go:65] Loading cluster: ha-548195
	I0127 11:02:43.299078  373254 notify.go:220] Checking for updates...
	I0127 11:02:43.299414  373254 config.go:182] Loaded profile config "ha-548195": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:02:43.299436  373254 status.go:174] checking status of ha-548195 ...
	I0127 11:02:43.299824  373254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:02:43.299868  373254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:02:43.315381  373254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40149
	I0127 11:02:43.315770  373254 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:02:43.316248  373254 main.go:141] libmachine: Using API Version  1
	I0127 11:02:43.316268  373254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:02:43.316634  373254 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:02:43.316851  373254 main.go:141] libmachine: (ha-548195) Calling .GetState
	I0127 11:02:43.318462  373254 status.go:371] ha-548195 host status = "Stopped" (err=<nil>)
	I0127 11:02:43.318477  373254 status.go:384] host is not running, skipping remaining checks
	I0127 11:02:43.318483  373254 status.go:176] ha-548195 status: &{Name:ha-548195 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:02:43.318533  373254 status.go:174] checking status of ha-548195-m02 ...
	I0127 11:02:43.318812  373254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:02:43.318862  373254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:02:43.332860  373254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
	I0127 11:02:43.333155  373254 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:02:43.333515  373254 main.go:141] libmachine: Using API Version  1
	I0127 11:02:43.333537  373254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:02:43.333792  373254 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:02:43.333956  373254 main.go:141] libmachine: (ha-548195-m02) Calling .GetState
	I0127 11:02:43.335104  373254 status.go:371] ha-548195-m02 host status = "Stopped" (err=<nil>)
	I0127 11:02:43.335115  373254 status.go:384] host is not running, skipping remaining checks
	I0127 11:02:43.335120  373254 status.go:176] ha-548195-m02 status: &{Name:ha-548195-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:02:43.335131  373254 status.go:174] checking status of ha-548195-m04 ...
	I0127 11:02:43.335369  373254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:02:43.335405  373254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:02:43.349091  373254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0127 11:02:43.349516  373254 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:02:43.349959  373254 main.go:141] libmachine: Using API Version  1
	I0127 11:02:43.349985  373254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:02:43.350305  373254 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:02:43.350459  373254 main.go:141] libmachine: (ha-548195-m04) Calling .GetState
	I0127 11:02:43.351769  373254 status.go:371] ha-548195-m04 host status = "Stopped" (err=<nil>)
	I0127 11:02:43.351787  373254 status.go:384] host is not running, skipping remaining checks
	I0127 11:02:43.351794  373254 status.go:176] ha-548195-m04 status: &{Name:ha-548195-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.09s)

TestMultiControlPlane/serial/RestartCluster (164.02s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-548195 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 11:03:22.837784  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:04:45.901001  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-548195 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m43.29862913s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (164.02s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

TestMultiControlPlane/serial/AddSecondaryNode (73.73s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-548195 --control-plane -v=7 --alsologtostderr
E0127 11:05:44.115152  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-548195 --control-plane -v=7 --alsologtostderr: (1m12.871707991s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-548195 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.73s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (84.24s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-685002 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-685002 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m24.238986814s)
--- PASS: TestJSONOutput/start/Command (84.24s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-685002 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-685002 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.37s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-685002 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-685002 --output=json --user=testUser: (6.370138573s)
--- PASS: TestJSONOutput/stop/Command (6.37s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-182777 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-182777 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.507813ms)

-- stdout --
	{"specversion":"1.0","id":"73ec7b51-31b7-4d0e-82a6-2a21fabb738e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-182777] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"598d726a-0a7c-44ed-83ed-4bf82b4dbd8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20319"}}
	{"specversion":"1.0","id":"4ee0f54c-64ea-488d-9391-c5a4fc0345ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4b221951-cd3c-4f05-b6d3-88e0a65341ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig"}}
	{"specversion":"1.0","id":"24a0c702-009e-4ab7-9f88-caf9b176a89c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube"}}
	{"specversion":"1.0","id":"7b13d44a-14cb-465f-942e-2b4fa82c1c9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"875432dd-3de7-4841-bd5b-48c198aee19f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bd0c6397-e588-4046-abf4-3e81158d9211","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-182777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-182777
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (88.62s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-694429 --driver=kvm2  --container-runtime=containerd
E0127 11:08:22.839654  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-694429 --driver=kvm2  --container-runtime=containerd: (42.654484597s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-715516 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-715516 --driver=kvm2  --container-runtime=containerd: (43.136916675s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-694429
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-715516
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-715516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-715516
helpers_test.go:175: Cleaning up "first-694429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-694429
--- PASS: TestMinikubeProfile (88.62s)

TestMountStart/serial/StartWithMountFirst (25.92s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-982915 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-982915 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (24.918175797s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.92s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-982915 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-982915 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

TestMountStart/serial/StartWithMountSecond (25.98s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-997114 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-997114 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (24.98248342s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.98s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-997114 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-997114 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.59s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-982915 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.59s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-997114 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-997114 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-997114
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-997114: (1.287581667s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (23.32s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-997114
E0127 11:10:44.114455  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-997114: (22.316451235s)
--- PASS: TestMountStart/serial/RestartStopped (23.32s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-997114 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-997114 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (109.64s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-772042 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-772042 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m49.234335841s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.64s)

TestMultiNode/serial/DeployApp2Nodes (3.93s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-772042 -- rollout status deployment/busybox: (2.426907821s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- exec busybox-58667487b6-n5p72 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- exec busybox-58667487b6-w5xx5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- exec busybox-58667487b6-n5p72 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- exec busybox-58667487b6-w5xx5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- exec busybox-58667487b6-n5p72 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- exec busybox-58667487b6-w5xx5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.93s)

TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- exec busybox-58667487b6-n5p72 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- exec busybox-58667487b6-n5p72 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- exec busybox-58667487b6-w5xx5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-772042 -- exec busybox-58667487b6-w5xx5 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

TestMultiNode/serial/AddNode (48.58s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-772042 -v 3 --alsologtostderr
E0127 11:13:22.837226  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:13:47.180207  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-772042 -v 3 --alsologtostderr: (48.026353603s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.58s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-772042 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.57s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

TestMultiNode/serial/CopyFile (7.15s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 cp testdata/cp-test.txt multinode-772042:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 cp multinode-772042:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4237600749/001/cp-test_multinode-772042.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 cp multinode-772042:/home/docker/cp-test.txt multinode-772042-m02:/home/docker/cp-test_multinode-772042_multinode-772042-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042-m02 "sudo cat /home/docker/cp-test_multinode-772042_multinode-772042-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 cp multinode-772042:/home/docker/cp-test.txt multinode-772042-m03:/home/docker/cp-test_multinode-772042_multinode-772042-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042-m03 "sudo cat /home/docker/cp-test_multinode-772042_multinode-772042-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 cp testdata/cp-test.txt multinode-772042-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 cp multinode-772042-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4237600749/001/cp-test_multinode-772042-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 cp multinode-772042-m02:/home/docker/cp-test.txt multinode-772042:/home/docker/cp-test_multinode-772042-m02_multinode-772042.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042 "sudo cat /home/docker/cp-test_multinode-772042-m02_multinode-772042.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 cp multinode-772042-m02:/home/docker/cp-test.txt multinode-772042-m03:/home/docker/cp-test_multinode-772042-m02_multinode-772042-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042-m03 "sudo cat /home/docker/cp-test_multinode-772042-m02_multinode-772042-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 cp testdata/cp-test.txt multinode-772042-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 cp multinode-772042-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4237600749/001/cp-test_multinode-772042-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 cp multinode-772042-m03:/home/docker/cp-test.txt multinode-772042:/home/docker/cp-test_multinode-772042-m03_multinode-772042.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042 "sudo cat /home/docker/cp-test_multinode-772042-m03_multinode-772042.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 cp multinode-772042-m03:/home/docker/cp-test.txt multinode-772042-m02:/home/docker/cp-test_multinode-772042-m03_multinode-772042-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 ssh -n multinode-772042-m02 "sudo cat /home/docker/cp-test_multinode-772042-m03_multinode-772042-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.15s)

TestMultiNode/serial/StopNode (2.12s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-772042 node stop m03: (1.285358578s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-772042 status: exit status 7 (419.423533ms)

-- stdout --
	multinode-772042
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-772042-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-772042-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-772042 status --alsologtostderr: exit status 7 (416.703931ms)

-- stdout --
	multinode-772042
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-772042-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-772042-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 11:13:58.721759  380766 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:13:58.721998  380766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:13:58.722008  380766 out.go:358] Setting ErrFile to fd 2...
	I0127 11:13:58.722014  380766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:13:58.722188  380766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
	I0127 11:13:58.722371  380766 out.go:352] Setting JSON to false
	I0127 11:13:58.722416  380766 mustload.go:65] Loading cluster: multinode-772042
	I0127 11:13:58.722497  380766 notify.go:220] Checking for updates...
	I0127 11:13:58.722923  380766 config.go:182] Loaded profile config "multinode-772042": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:13:58.722950  380766 status.go:174] checking status of multinode-772042 ...
	I0127 11:13:58.723447  380766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:13:58.723496  380766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:58.739735  380766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0127 11:13:58.740125  380766 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:58.740754  380766 main.go:141] libmachine: Using API Version  1
	I0127 11:13:58.740779  380766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:58.741073  380766 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:58.741294  380766 main.go:141] libmachine: (multinode-772042) Calling .GetState
	I0127 11:13:58.742752  380766 status.go:371] multinode-772042 host status = "Running" (err=<nil>)
	I0127 11:13:58.742765  380766 host.go:66] Checking if "multinode-772042" exists ...
	I0127 11:13:58.743002  380766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:13:58.743033  380766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:58.757201  380766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
	I0127 11:13:58.757632  380766 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:58.758103  380766 main.go:141] libmachine: Using API Version  1
	I0127 11:13:58.758129  380766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:58.758426  380766 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:58.758608  380766 main.go:141] libmachine: (multinode-772042) Calling .GetIP
	I0127 11:13:58.761043  380766 main.go:141] libmachine: (multinode-772042) DBG | domain multinode-772042 has defined MAC address 52:54:00:6a:64:4b in network mk-multinode-772042
	I0127 11:13:58.761425  380766 main.go:141] libmachine: (multinode-772042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:64:4b", ip: ""} in network mk-multinode-772042: {Iface:virbr1 ExpiryTime:2025-01-27 12:11:20 +0000 UTC Type:0 Mac:52:54:00:6a:64:4b Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:multinode-772042 Clientid:01:52:54:00:6a:64:4b}
	I0127 11:13:58.761448  380766 main.go:141] libmachine: (multinode-772042) DBG | domain multinode-772042 has defined IP address 192.168.39.173 and MAC address 52:54:00:6a:64:4b in network mk-multinode-772042
	I0127 11:13:58.761630  380766 host.go:66] Checking if "multinode-772042" exists ...
	I0127 11:13:58.761950  380766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:13:58.761985  380766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:58.776060  380766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0127 11:13:58.776406  380766 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:58.776828  380766 main.go:141] libmachine: Using API Version  1
	I0127 11:13:58.776849  380766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:58.777099  380766 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:58.777301  380766 main.go:141] libmachine: (multinode-772042) Calling .DriverName
	I0127 11:13:58.777466  380766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:13:58.777487  380766 main.go:141] libmachine: (multinode-772042) Calling .GetSSHHostname
	I0127 11:13:58.779920  380766 main.go:141] libmachine: (multinode-772042) DBG | domain multinode-772042 has defined MAC address 52:54:00:6a:64:4b in network mk-multinode-772042
	I0127 11:13:58.780258  380766 main.go:141] libmachine: (multinode-772042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:64:4b", ip: ""} in network mk-multinode-772042: {Iface:virbr1 ExpiryTime:2025-01-27 12:11:20 +0000 UTC Type:0 Mac:52:54:00:6a:64:4b Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:multinode-772042 Clientid:01:52:54:00:6a:64:4b}
	I0127 11:13:58.780286  380766 main.go:141] libmachine: (multinode-772042) DBG | domain multinode-772042 has defined IP address 192.168.39.173 and MAC address 52:54:00:6a:64:4b in network mk-multinode-772042
	I0127 11:13:58.780399  380766 main.go:141] libmachine: (multinode-772042) Calling .GetSSHPort
	I0127 11:13:58.780566  380766 main.go:141] libmachine: (multinode-772042) Calling .GetSSHKeyPath
	I0127 11:13:58.780715  380766 main.go:141] libmachine: (multinode-772042) Calling .GetSSHUsername
	I0127 11:13:58.780849  380766 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/multinode-772042/id_rsa Username:docker}
	I0127 11:13:58.861137  380766 ssh_runner.go:195] Run: systemctl --version
	I0127 11:13:58.867247  380766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:13:58.883045  380766 kubeconfig.go:125] found "multinode-772042" server: "https://192.168.39.173:8443"
	I0127 11:13:58.883074  380766 api_server.go:166] Checking apiserver status ...
	I0127 11:13:58.883102  380766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:13:58.897618  380766 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1078/cgroup
	W0127 11:13:58.907524  380766 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1078/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:13:58.907554  380766 ssh_runner.go:195] Run: ls
	I0127 11:13:58.911879  380766 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0127 11:13:58.916509  380766 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0127 11:13:58.916535  380766 status.go:463] multinode-772042 apiserver status = Running (err=<nil>)
	I0127 11:13:58.916548  380766 status.go:176] multinode-772042 status: &{Name:multinode-772042 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:13:58.916571  380766 status.go:174] checking status of multinode-772042-m02 ...
	I0127 11:13:58.916865  380766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:13:58.916898  380766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:58.932471  380766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41967
	I0127 11:13:58.932837  380766 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:58.933275  380766 main.go:141] libmachine: Using API Version  1
	I0127 11:13:58.933297  380766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:58.933636  380766 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:58.933824  380766 main.go:141] libmachine: (multinode-772042-m02) Calling .GetState
	I0127 11:13:58.935265  380766 status.go:371] multinode-772042-m02 host status = "Running" (err=<nil>)
	I0127 11:13:58.935282  380766 host.go:66] Checking if "multinode-772042-m02" exists ...
	I0127 11:13:58.935658  380766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:13:58.935702  380766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:58.950089  380766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39801
	I0127 11:13:58.950424  380766 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:58.950849  380766 main.go:141] libmachine: Using API Version  1
	I0127 11:13:58.950874  380766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:58.951170  380766 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:58.951347  380766 main.go:141] libmachine: (multinode-772042-m02) Calling .GetIP
	I0127 11:13:58.953721  380766 main.go:141] libmachine: (multinode-772042-m02) DBG | domain multinode-772042-m02 has defined MAC address 52:54:00:06:fa:70 in network mk-multinode-772042
	I0127 11:13:58.954093  380766 main.go:141] libmachine: (multinode-772042-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:fa:70", ip: ""} in network mk-multinode-772042: {Iface:virbr1 ExpiryTime:2025-01-27 12:12:19 +0000 UTC Type:0 Mac:52:54:00:06:fa:70 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:multinode-772042-m02 Clientid:01:52:54:00:06:fa:70}
	I0127 11:13:58.954120  380766 main.go:141] libmachine: (multinode-772042-m02) DBG | domain multinode-772042-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:06:fa:70 in network mk-multinode-772042
	I0127 11:13:58.954294  380766 host.go:66] Checking if "multinode-772042-m02" exists ...
	I0127 11:13:58.954681  380766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:13:58.954724  380766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:58.970517  380766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I0127 11:13:58.970856  380766 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:58.971358  380766 main.go:141] libmachine: Using API Version  1
	I0127 11:13:58.971379  380766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:58.971671  380766 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:58.971845  380766 main.go:141] libmachine: (multinode-772042-m02) Calling .DriverName
	I0127 11:13:58.971991  380766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:13:58.972012  380766 main.go:141] libmachine: (multinode-772042-m02) Calling .GetSSHHostname
	I0127 11:13:58.975002  380766 main.go:141] libmachine: (multinode-772042-m02) DBG | domain multinode-772042-m02 has defined MAC address 52:54:00:06:fa:70 in network mk-multinode-772042
	I0127 11:13:58.975444  380766 main.go:141] libmachine: (multinode-772042-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:fa:70", ip: ""} in network mk-multinode-772042: {Iface:virbr1 ExpiryTime:2025-01-27 12:12:19 +0000 UTC Type:0 Mac:52:54:00:06:fa:70 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:multinode-772042-m02 Clientid:01:52:54:00:06:fa:70}
	I0127 11:13:58.975476  380766 main.go:141] libmachine: (multinode-772042-m02) DBG | domain multinode-772042-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:06:fa:70 in network mk-multinode-772042
	I0127 11:13:58.975702  380766 main.go:141] libmachine: (multinode-772042-m02) Calling .GetSSHPort
	I0127 11:13:58.975859  380766 main.go:141] libmachine: (multinode-772042-m02) Calling .GetSSHKeyPath
	I0127 11:13:58.976012  380766 main.go:141] libmachine: (multinode-772042-m02) Calling .GetSSHUsername
	I0127 11:13:58.976114  380766 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/multinode-772042-m02/id_rsa Username:docker}
	I0127 11:13:59.056447  380766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:13:59.072013  380766 status.go:176] multinode-772042-m02 status: &{Name:multinode-772042-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:13:59.072038  380766 status.go:174] checking status of multinode-772042-m03 ...
	I0127 11:13:59.072390  380766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:13:59.072434  380766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:59.087841  380766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I0127 11:13:59.088281  380766 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:59.088736  380766 main.go:141] libmachine: Using API Version  1
	I0127 11:13:59.088754  380766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:59.089057  380766 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:59.089229  380766 main.go:141] libmachine: (multinode-772042-m03) Calling .GetState
	I0127 11:13:59.090606  380766 status.go:371] multinode-772042-m03 host status = "Stopped" (err=<nil>)
	I0127 11:13:59.090621  380766 status.go:384] host is not running, skipping remaining checks
	I0127 11:13:59.090628  380766 status.go:176] multinode-772042-m03 status: &{Name:multinode-772042-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.12s)
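For reference, the single-node stop exercised above can be reproduced with the commands the test ran (profile and node names taken verbatim from this run); status is expected to exit with code 7 while any node reports Stopped:

    # stop only the m03 worker of the multi-node profile
    out/minikube-linux-amd64 -p multinode-772042 node stop m03
    # non-zero exit (status 7) because one host is now Stopped
    out/minikube-linux-amd64 -p multinode-772042 status --alsologtostderr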

                                                
                                    
TestMultiNode/serial/StartAfterStop (33.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-772042 node start m03 -v=7 --alsologtostderr: (32.786418394s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (33.41s)
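The restart-after-stop flow above simply starts the stopped worker again and re-checks the cluster (same profile and node as in this run):

    out/minikube-linux-amd64 -p multinode-772042 node start m03 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p multinode-772042 status -v=7 --alsologtostderr
    kubectl get nodes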

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (324.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-772042
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-772042
E0127 11:15:44.115171  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-772042: (3m2.813245403s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-772042 --wait=true -v=8 --alsologtostderr
E0127 11:18:22.836722  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-772042 --wait=true -v=8 --alsologtostderr: (2m21.220027864s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-772042
--- PASS: TestMultiNode/serial/RestartKeepsNodes (324.13s)
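The restart test above stops the whole profile and starts it again with --wait=true, comparing node membership before and after; condensed from this run:

    out/minikube-linux-amd64 node list -p multinode-772042
    out/minikube-linux-amd64 stop -p multinode-772042
    out/minikube-linux-amd64 start -p multinode-772042 --wait=true -v=8 --alsologtostderr
    out/minikube-linux-amd64 node list -p multinode-772042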

                                                
                                    
TestMultiNode/serial/DeleteNode (2.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-772042 node delete m03: (1.511474932s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.03s)
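Node removal as exercised above; the final go-template query (verbatim from the test) checks that every remaining node reports Ready:

    out/minikube-linux-amd64 -p multinode-772042 node delete m03
    out/minikube-linux-amd64 -p multinode-772042 status --alsologtostderr
    kubectl get nodes
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"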

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 stop
E0127 11:20:44.114417  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:21:25.904274  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-772042 stop: (3m1.583400794s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-772042 status: exit status 7 (85.484538ms)

                                                
                                                
-- stdout --
	multinode-772042
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-772042-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-772042 status --alsologtostderr: exit status 7 (81.83161ms)

                                                
                                                
-- stdout --
	multinode-772042
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-772042-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:23:00.375482  383850 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:23:00.375587  383850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:23:00.375596  383850 out.go:358] Setting ErrFile to fd 2...
	I0127 11:23:00.375599  383850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:23:00.375750  383850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
	I0127 11:23:00.375898  383850 out.go:352] Setting JSON to false
	I0127 11:23:00.375925  383850 mustload.go:65] Loading cluster: multinode-772042
	I0127 11:23:00.376067  383850 notify.go:220] Checking for updates...
	I0127 11:23:00.376318  383850 config.go:182] Loaded profile config "multinode-772042": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:23:00.376337  383850 status.go:174] checking status of multinode-772042 ...
	I0127 11:23:00.376730  383850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:23:00.376763  383850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:23:00.392039  383850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0127 11:23:00.392476  383850 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:23:00.393107  383850 main.go:141] libmachine: Using API Version  1
	I0127 11:23:00.393144  383850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:23:00.393471  383850 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:23:00.393671  383850 main.go:141] libmachine: (multinode-772042) Calling .GetState
	I0127 11:23:00.395134  383850 status.go:371] multinode-772042 host status = "Stopped" (err=<nil>)
	I0127 11:23:00.395147  383850 status.go:384] host is not running, skipping remaining checks
	I0127 11:23:00.395151  383850 status.go:176] multinode-772042 status: &{Name:multinode-772042 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:23:00.395172  383850 status.go:174] checking status of multinode-772042-m02 ...
	I0127 11:23:00.395443  383850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:23:00.395475  383850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:23:00.409311  383850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35147
	I0127 11:23:00.409706  383850 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:23:00.410114  383850 main.go:141] libmachine: Using API Version  1
	I0127 11:23:00.410139  383850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:23:00.410429  383850 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:23:00.410604  383850 main.go:141] libmachine: (multinode-772042-m02) Calling .GetState
	I0127 11:23:00.412040  383850 status.go:371] multinode-772042-m02 host status = "Stopped" (err=<nil>)
	I0127 11:23:00.412051  383850 status.go:384] host is not running, skipping remaining checks
	I0127 11:23:00.412055  383850 status.go:176] multinode-772042-m02 status: &{Name:multinode-772042-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.75s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (107.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-772042 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 11:23:22.836641  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-772042 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m46.553615083s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-772042 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (107.09s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-772042
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-772042-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-772042-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (63.320283ms)

                                                
                                                
-- stdout --
	* [multinode-772042-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-772042-m02' is duplicated with machine name 'multinode-772042-m02' in profile 'multinode-772042'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-772042-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-772042-m03 --driver=kvm2  --container-runtime=containerd: (42.445903067s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-772042
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-772042: exit status 80 (210.423196ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-772042 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-772042-m03 already exists in multinode-772042-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-772042-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.48s)
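The name-conflict checks above rely on two behaviours: starting a profile whose name collides with a machine name inside an existing profile fails with exit status 14 (MK_USAGE), and node add refuses to reuse a node name already taken by another profile (exit status 80, GUEST_NODE_ADD). Reproduced from this run:

    # rejected: multinode-772042-m02 is already a machine in profile multinode-772042
    out/minikube-linux-amd64 start -p multinode-772042-m02 --driver=kvm2 --container-runtime=containerd
    # allowed as a standalone profile...
    out/minikube-linux-amd64 start -p multinode-772042-m03 --driver=kvm2 --container-runtime=containerd
    # ...which then blocks node add from reusing the m03 name
    out/minikube-linux-amd64 node add -p multinode-772042
    out/minikube-linux-amd64 delete -p multinode-772042-m03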

                                                
                                    
TestPreload (255.13s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-416632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0127 11:25:44.114433  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-416632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m56.268543519s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-416632 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-416632 image pull gcr.io/k8s-minikube/busybox: (1.828233627s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-416632
E0127 11:28:22.843831  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-416632: (1m30.820668174s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-416632 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-416632 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (44.993412675s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-416632 image list
helpers_test.go:175: Cleaning up "test-preload-416632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-416632
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-416632: (1.008488966s)
--- PASS: TestPreload (255.13s)
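The preload test deliberately builds its image cache the slow way: the cluster is created with --preload=false on an older Kubernetes, an extra image is pulled, the VM is stopped, and a plain restart must still find that image in image list. Condensed from the run above:

    out/minikube-linux-amd64 start -p test-preload-416632 --memory=2200 --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
    out/minikube-linux-amd64 -p test-preload-416632 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-416632
    out/minikube-linux-amd64 start -p test-preload-416632 --memory=2200 --wait=true --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 -p test-preload-416632 image list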

                                                
                                    
TestScheduledStopUnix (116.91s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-645751 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0127 11:30:27.183676  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-645751 --memory=2048 --driver=kvm2  --container-runtime=containerd: (45.269285257s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-645751 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-645751 -n scheduled-stop-645751
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-645751 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 11:30:33.068288  356204 retry.go:31] will retry after 84.846µs: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.069442  356204 retry.go:31] will retry after 177.139µs: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.070579  356204 retry.go:31] will retry after 202.14µs: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.071714  356204 retry.go:31] will retry after 207.046µs: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.072838  356204 retry.go:31] will retry after 462.905µs: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.073964  356204 retry.go:31] will retry after 583.087µs: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.075079  356204 retry.go:31] will retry after 838.771µs: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.076213  356204 retry.go:31] will retry after 1.891827ms: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.078479  356204 retry.go:31] will retry after 2.41982ms: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.081696  356204 retry.go:31] will retry after 5.555446ms: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.087955  356204 retry.go:31] will retry after 7.544956ms: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.096181  356204 retry.go:31] will retry after 8.800094ms: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.105368  356204 retry.go:31] will retry after 18.20242ms: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.124588  356204 retry.go:31] will retry after 25.466436ms: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
I0127 11:30:33.150812  356204 retry.go:31] will retry after 39.280168ms: open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/scheduled-stop-645751/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-645751 --cancel-scheduled
E0127 11:30:44.114660  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-645751 -n scheduled-stop-645751
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-645751
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-645751 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-645751
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-645751: exit status 7 (68.153135ms)

                                                
                                                
-- stdout --
	scheduled-stop-645751
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-645751 -n scheduled-stop-645751
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-645751 -n scheduled-stop-645751: exit status 7 (66.016968ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-645751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-645751
--- PASS: TestScheduledStopUnix (116.91s)
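Scheduled stop as driven above: a stop is queued with --schedule, can be re-queued or cancelled while the timer is pending, and once a short 15s timer is left to fire the profile reports Stopped (status then exits with code 7). Condensed from this run:

    out/minikube-linux-amd64 stop -p scheduled-stop-645751 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-645751 --cancel-scheduled
    # re-queue with a short timer and let it fire; the host then reports Stopped
    out/minikube-linux-amd64 stop -p scheduled-stop-645751 --schedule 15s
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-645751 -n scheduled-stop-645751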

                                                
                                    
TestRunningBinaryUpgrade (249.53s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.609032921 start -p running-upgrade-948719 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.609032921 start -p running-upgrade-948719 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m11.217002054s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-948719 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-948719 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m56.479704062s)
helpers_test.go:175: Cleaning up "running-upgrade-948719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-948719
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-948719: (1.061129807s)
--- PASS: TestRunningBinaryUpgrade (249.53s)
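The running-binary upgrade starts a cluster with a previously released minikube (v1.26.0, unpacked to a temporary path for the test) and then re-runs start on the same profile with the freshly built binary while the VM is still up; the commands, verbatim from this run:

    /tmp/minikube-v1.26.0.609032921 start -p running-upgrade-948719 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 start -p running-upgrade-948719 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 delete -p running-upgrade-948719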

                                                
                                    
TestKubernetesUpgrade (195.56s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-469408 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-469408 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m23.922408923s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-469408
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-469408: (1.628067045s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-469408 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-469408 status --format={{.Host}}: exit status 7 (73.008398ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-469408 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0127 11:38:05.906687  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-469408 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (51.825338464s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-469408 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-469408 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-469408 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (83.957338ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-469408] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-469408
	    minikube start -p kubernetes-upgrade-469408 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4694082 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-469408 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-469408 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-469408 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (56.765125327s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-469408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-469408
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-469408: (1.203411951s)
--- PASS: TestKubernetesUpgrade (195.56s)
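The upgrade test walks one profile from Kubernetes v1.20.0 to v1.32.1 and then verifies that downgrading the same profile is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) while a restart at the new version still works. Condensed from this run:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-469408 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-469408
    out/minikube-linux-amd64 start -p kubernetes-upgrade-469408 --memory=2200 --kubernetes-version=v1.32.1 --driver=kvm2 --container-runtime=containerd
    # refused: an existing cluster cannot be downgraded in place
    out/minikube-linux-amd64 start -p kubernetes-upgrade-469408 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd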

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-935083 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-935083 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (74.274898ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-935083] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
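The validation above confirms that --no-kubernetes and --kubernetes-version are mutually exclusive: the start fails immediately with exit status 14 (MK_USAGE), and the error output points at clearing any globally pinned version. Reproduced from this run:

    # rejected: cannot combine --no-kubernetes with a pinned Kubernetes version
    out/minikube-linux-amd64 start -p NoKubernetes-935083 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=containerd
    # suggested remedy from the error output
    out/minikube-linux-amd64 config unset kubernetes-version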

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (93.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-935083 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-935083 --driver=kvm2  --container-runtime=containerd: (1m33.612180181s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-935083 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.85s)

                                                
                                    
TestPause/serial/Start (110.91s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-075456 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-075456 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m50.912411666s)
--- PASS: TestPause/serial/Start (110.91s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (82.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-935083 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0127 11:33:22.837729  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-935083 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m21.52610949s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-935083 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-935083 status -o json: exit status 2 (223.190655ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-935083","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-935083
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (82.64s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (46.23s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-075456 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-075456 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (46.21272959s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (46.23s)

                                                
                                    
TestNoKubernetes/serial/Start (25.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-935083 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-935083 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (25.60876157s)
--- PASS: TestNoKubernetes/serial/Start (25.61s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-935083 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-935083 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.975462ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (32.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.196918598s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (17.371857983s)
--- PASS: TestNoKubernetes/serial/ProfileList (32.57s)

                                                
                                    
TestPause/serial/Pause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-075456 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

                                                
                                    
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-075456 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-075456 --output=json --layout=cluster: exit status 2 (242.577217ms)

                                                
                                                
-- stdout --
	{"Name":"pause-075456","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-075456","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
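While a profile is paused, status --output=json --layout=cluster exits with status 2 and reports StatusCode 418 ("Paused") for the cluster and its apiserver, which is what the verification step above asserts. Reproduced from this run:

    out/minikube-linux-amd64 pause -p pause-075456 --alsologtostderr -v=5
    # exits 2 while paused; kubelet shows 405 (Stopped), apiserver 418 (Paused)
    out/minikube-linux-amd64 status -p pause-075456 --output=json --layout=cluster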

                                                
                                    
TestPause/serial/Unpause (0.59s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-075456 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.59s)

                                                
                                    
TestPause/serial/PauseAgain (0.71s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-075456 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.71s)

                                                
                                    
TestPause/serial/DeletePaused (0.67s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-075456 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.67s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (14.89s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.887332913s)
--- PASS: TestPause/serial/VerifyDeletedResources (14.89s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-935083
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-935083: (1.300390202s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (43.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-935083 --driver=kvm2  --container-runtime=containerd
E0127 11:35:44.114558  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-935083 --driver=kvm2  --container-runtime=containerd: (43.541752647s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.54s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-935083 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-935083 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.35416ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.45s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (163.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1871642620 start -p stopped-upgrade-559730 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1871642620 start -p stopped-upgrade-559730 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m44.786857366s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1871642620 -p stopped-upgrade-559730 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1871642620 -p stopped-upgrade-559730 stop: (1.229531238s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-559730 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0127 11:38:22.837754  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/functional-430173/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-559730 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (57.044404635s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (163.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (183.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-705124 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-705124 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m3.550178748s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (183.55s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (109.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-976043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-976043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m49.716896865s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (109.72s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-559730
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (84.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-206954 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-206954 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m24.594651565s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.59s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-259716 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-259716 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m22.926830214s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-705124 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b32eab10-d663-4c31-862a-c982e527818e] Pending
helpers_test.go:344: "busybox" [b32eab10-d663-4c31-862a-c982e527818e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b32eab10-d663-4c31-862a-c982e527818e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004687398s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-705124 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-705124 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-705124 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-976043 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a41f0694-440b-4f3a-b150-bca950a88c77] Pending
helpers_test.go:344: "busybox" [a41f0694-440b-4f3a-b150-bca950a88c77] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a41f0694-440b-4f3a-b150-bca950a88c77] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004639673s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-976043 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (90.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-705124 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-705124 --alsologtostderr -v=3: (1m30.669864908s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (90.67s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-976043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-976043 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (90.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-976043 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-976043 --alsologtostderr -v=3: (1m30.860240301s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.86s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-206954 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5732fa04-50d4-46c0-98bf-90ace62c1743] Pending
helpers_test.go:344: "busybox" [5732fa04-50d4-46c0-98bf-90ace62c1743] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5732fa04-50d4-46c0-98bf-90ace62c1743] Running
E0127 11:40:44.114237  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003959277s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-206954 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-206954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-206954 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (90.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-206954 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-206954 --alsologtostderr -v=3: (1m30.867646888s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.87s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-259716 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [628892b4-8aea-4d19-a585-8acdce61427b] Pending
helpers_test.go:344: "busybox" [628892b4-8aea-4d19-a585-8acdce61427b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [628892b4-8aea-4d19-a585-8acdce61427b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003931174s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-259716 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-259716 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-259716 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-259716 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-259716 --alsologtostderr -v=3: (1m31.065339986s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-705124 -n old-k8s-version-705124
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-705124 -n old-k8s-version-705124: exit status 7 (62.626141ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-705124 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (160.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-705124 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-705124 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m40.672473964s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-705124 -n old-k8s-version-705124
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (160.94s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-976043 -n no-preload-976043
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-976043 -n no-preload-976043: exit status 7 (77.545749ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-976043 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-206954 -n embed-certs-206954
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-206954 -n embed-certs-206954: exit status 7 (83.63655ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-206954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (308.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-206954 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-206954 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (5m8.292529604s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-206954 -n embed-certs-206954
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (308.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-259716 -n default-k8s-diff-port-259716
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-259716 -n default-k8s-diff-port-259716: exit status 7 (86.714801ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-259716 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4lxrs" [22f5fae5-b50c-4c5a-a65b-4da63e7bb9dc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004094181s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4lxrs" [22f5fae5-b50c-4c5a-a65b-4da63e7bb9dc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003880414s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-705124 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-705124 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-705124 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-705124 -n old-k8s-version-705124
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-705124 -n old-k8s-version-705124: exit status 2 (247.667004ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-705124 -n old-k8s-version-705124
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-705124 -n old-k8s-version-705124: exit status 2 (238.133365ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-705124 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-705124 -n old-k8s-version-705124
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-705124 -n old-k8s-version-705124
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (51.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-494521 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 11:45:13.603091  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:13.609607  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:13.620988  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:13.642404  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:13.683763  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:13.765160  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:13.926646  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:14.248441  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:14.890157  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:16.172404  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:18.734636  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:23.856733  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:34.098822  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-494521 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (51.481548095s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-494521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-494521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.090303464s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-494521 --alsologtostderr -v=3
E0127 11:45:44.115211  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/addons-245022/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-494521 --alsologtostderr -v=3: (2.29718802s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-494521 -n newest-cni-494521
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-494521 -n newest-cni-494521: exit status 7 (72.08009ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-494521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (33.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-494521 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 11:45:54.580373  356204 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/old-k8s-version-705124/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-494521 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (32.792070723s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-494521 -n newest-cni-494521
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.05s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-494521 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-494521 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-494521 -n newest-cni-494521
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-494521 -n newest-cni-494521: exit status 2 (243.381534ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-494521 -n newest-cni-494521
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-494521 -n newest-cni-494521: exit status 2 (242.568635ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-494521 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-494521 -n newest-cni-494521
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-494521 -n newest-cni-494521
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ct2jf" [4f7e0597-c15f-4105-ac21-f2cf90de9a2d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004746552s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ct2jf" [4f7e0597-c15f-4105-ac21-f2cf90de9a2d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004728529s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-206954 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-206954 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-206954 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-206954 -n embed-certs-206954
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-206954 -n embed-certs-206954: exit status 2 (299.449918ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-206954 -n embed-certs-206954
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-206954 -n embed-certs-206954: exit status 2 (282.928585ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-206954 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-206954 -n embed-certs-206954
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-206954 -n embed-certs-206954
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.00s)

                                                
                                    

Test skip (36/272)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
259 TestStartStop/group/disable-driver-mounts 0.21
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
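
All eight TunnelCmd sub-tests skip for the same reason: running 'route' on this agent evidently requires a sudo password, so the tunnel checks cannot proceed non-interactively. As a hedged sketch (not minikube's actual check), a test can probe for passwordless access up front and skip when it is unavailable:

    package example

    import (
    	"os/exec"
    	"testing"
    )

    // Hypothetical sketch: skip when 'route' cannot be run without a password.
    // 'sudo -n' fails immediately instead of prompting if a password would be required.
    func TestNeedsRoute(t *testing.T) {
    	if err := exec.Command("sudo", "-n", "route", "-n").Run(); err != nil {
    		t.Skipf("password required to execute 'route', skipping: %v", err)
    	}
    	// tunnel-related assertions would follow here
    }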

                                                
                                    
x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-785589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-785589
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    