Test Report: KVM_Linux_containerd 20151

33072eff0e89b858b45dc04bb45c552eedaf3583:2025-01-20:37991

Failed tests (2/320)

Order  Failed test                                           Duration (s)
310    TestStartStop/group/no-preload/serial/SecondStart     1540.68
335    TestStartStop/group/embed-certs/serial/SecondStart    1639.59
TestStartStop/group/no-preload/serial/SecondStart (1540.68s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-677886 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-677886 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: signal: killed (25m38.549905556s)

-- stdout --
	* [no-preload-677886] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-677886" primary control-plane node in "no-preload-677886" cluster
	* Restarting existing kvm2 VM for "no-preload-677886" ...
	* Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-677886 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

-- /stdout --
** stderr ** 
	I0120 12:24:56.456694  580663 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:24:56.456807  580663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:24:56.456819  580663 out.go:358] Setting ErrFile to fd 2...
	I0120 12:24:56.456825  580663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:24:56.457135  580663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	I0120 12:24:56.457912  580663 out.go:352] Setting JSON to false
	I0120 12:24:56.459154  580663 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7638,"bootTime":1737368258,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:24:56.459293  580663 start.go:139] virtualization: kvm guest
	I0120 12:24:56.462566  580663 out.go:177] * [no-preload-677886] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:24:56.464284  580663 notify.go:220] Checking for updates...
	I0120 12:24:56.464318  580663 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:24:56.465942  580663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:24:56.467512  580663 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 12:24:56.469186  580663 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 12:24:56.471016  580663 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:24:56.472494  580663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:24:56.474747  580663 config.go:182] Loaded profile config "no-preload-677886": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:24:56.475419  580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:24:56.475515  580663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:24:56.496824  580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I0120 12:24:56.497392  580663 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:24:56.498149  580663 main.go:141] libmachine: Using API Version  1
	I0120 12:24:56.498177  580663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:24:56.498597  580663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:24:56.498857  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:24:56.499148  580663 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:24:56.499492  580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:24:56.499559  580663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:24:56.516567  580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
	I0120 12:24:56.517028  580663 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:24:56.517699  580663 main.go:141] libmachine: Using API Version  1
	I0120 12:24:56.517733  580663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:24:56.518096  580663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:24:56.518340  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:24:56.563618  580663 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:24:56.565156  580663 start.go:297] selected driver: kvm2
	I0120 12:24:56.565183  580663 start.go:901] validating driver "kvm2" against &{Name:no-preload-677886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-677886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:24:56.565401  580663 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:24:56.566509  580663 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:24:56.566612  580663 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-530330/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:24:56.585311  580663 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:24:56.585967  580663 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:24:56.586027  580663 cni.go:84] Creating CNI manager for ""
	I0120 12:24:56.586110  580663 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 12:24:56.586173  580663 start.go:340] cluster config:
	{Name:no-preload-677886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-677886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:24:56.586332  580663 iso.go:125] acquiring lock: {Name:mk734d848ce0e9a68d8d00ecbd0f5085f599b42f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:24:56.588434  580663 out.go:177] * Starting "no-preload-677886" primary control-plane node in "no-preload-677886" cluster
	I0120 12:24:56.589859  580663 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:24:56.590048  580663 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/config.json ...
	I0120 12:24:56.590096  580663 cache.go:107] acquiring lock: {Name:mkb50d5c4959af228c3f0e841267fc713f5657bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:24:56.590106  580663 cache.go:107] acquiring lock: {Name:mk7743765bee0171fb8408c07ab96f967c01da33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:24:56.590194  580663 cache.go:107] acquiring lock: {Name:mkdd6761dcff9cb317bee6a39867dd9f91a1c9d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:24:56.590199  580663 cache.go:107] acquiring lock: {Name:mkc3dcde5042d302783249c200b73a28b4207bfc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:24:56.590260  580663 cache.go:107] acquiring lock: {Name:mk801d27d0882d516653d3fd5808264aae328741 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:24:56.590306  580663 cache.go:107] acquiring lock: {Name:mkcf6886d16e7a92b8a48ad7cc85e0173f8a2af5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:24:56.590342  580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 exists
	I0120 12:24:56.590342  580663 start.go:360] acquireMachinesLock for no-preload-677886: {Name:mkcd5f2d114897136dd2343f9fcf468e718657e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:24:56.590353  580663 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.0" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0" took 96.435µs
	I0120 12:24:56.590362  580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 exists
	I0120 12:24:56.590374  580663 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.0 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 succeeded
	I0120 12:24:56.590373  580663 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.0" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0" took 278.104µs
	I0120 12:24:56.590289  580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 exists
	I0120 12:24:56.590386  580663 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.0 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 succeeded
	I0120 12:24:56.590354  580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0120 12:24:56.590391  580663 start.go:364] duration metric: took 27.887µs to acquireMachinesLock for "no-preload-677886"
	I0120 12:24:56.590392  580663 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.0" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0" took 198.872µs
	I0120 12:24:56.590399  580663 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 96.603µs
	I0120 12:24:56.590270  580663 cache.go:107] acquiring lock: {Name:mk8225973acaf0d36eacdfb4eba92b0ed26bdad9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:24:56.590407  580663 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0120 12:24:56.590409  580663 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:24:56.590418  580663 fix.go:54] fixHost starting: 
	I0120 12:24:56.590402  580663 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.0 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 succeeded
	I0120 12:24:56.590450  580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0120 12:24:56.590457  580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0120 12:24:56.590440  580663 cache.go:107] acquiring lock: {Name:mkb7510ccea43e6b11ab4abd1910eac7e5808368 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:24:56.590475  580663 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 388.372µs
	I0120 12:24:56.590497  580663 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0120 12:24:56.590461  580663 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 195.047µs
	I0120 12:24:56.590509  580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 exists
	I0120 12:24:56.590513  580663 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0120 12:24:56.590491  580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0120 12:24:56.590519  580663 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.0" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0" took 97.717µs
	I0120 12:24:56.590528  580663 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.0 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 succeeded
	I0120 12:24:56.590541  580663 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 351.851µs
	I0120 12:24:56.590559  580663 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0120 12:24:56.590568  580663 cache.go:87] Successfully saved all images to host disk.
	I0120 12:24:56.590843  580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:24:56.590885  580663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:24:56.609126  580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I0120 12:24:56.609634  580663 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:24:56.610330  580663 main.go:141] libmachine: Using API Version  1
	I0120 12:24:56.610358  580663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:24:56.610688  580663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:24:56.610900  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:24:56.611061  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetState
	I0120 12:24:56.613327  580663 fix.go:112] recreateIfNeeded on no-preload-677886: state=Stopped err=<nil>
	I0120 12:24:56.613369  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	W0120 12:24:56.613547  580663 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:24:56.615784  580663 out.go:177] * Restarting existing kvm2 VM for "no-preload-677886" ...
	I0120 12:24:56.617166  580663 main.go:141] libmachine: (no-preload-677886) Calling .Start
	I0120 12:24:56.617403  580663 main.go:141] libmachine: (no-preload-677886) starting domain...
	I0120 12:24:56.617428  580663 main.go:141] libmachine: (no-preload-677886) ensuring networks are active...
	I0120 12:24:56.618493  580663 main.go:141] libmachine: (no-preload-677886) Ensuring network default is active
	I0120 12:24:56.618996  580663 main.go:141] libmachine: (no-preload-677886) Ensuring network mk-no-preload-677886 is active
	I0120 12:24:56.619551  580663 main.go:141] libmachine: (no-preload-677886) getting domain XML...
	I0120 12:24:56.620569  580663 main.go:141] libmachine: (no-preload-677886) creating domain...
	I0120 12:24:58.098571  580663 main.go:141] libmachine: (no-preload-677886) waiting for IP...
	I0120 12:24:58.099691  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:24:58.100113  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:24:58.100379  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:24:58.100141  580698 retry.go:31] will retry after 196.998651ms: waiting for domain to come up
	I0120 12:24:58.299005  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:24:58.299649  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:24:58.299683  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:24:58.299605  580698 retry.go:31] will retry after 315.24245ms: waiting for domain to come up
	I0120 12:24:58.616292  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:24:58.616904  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:24:58.616939  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:24:58.616849  580698 retry.go:31] will retry after 406.941804ms: waiting for domain to come up
	I0120 12:24:59.025591  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:24:59.026266  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:24:59.026295  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:24:59.026214  580698 retry.go:31] will retry after 583.374913ms: waiting for domain to come up
	I0120 12:24:59.610886  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:24:59.611404  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:24:59.611431  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:24:59.611365  580698 retry.go:31] will retry after 580.640955ms: waiting for domain to come up
	I0120 12:25:00.193188  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:00.193688  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:25:00.193721  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:00.193666  580698 retry.go:31] will retry after 767.186037ms: waiting for domain to come up
	I0120 12:25:00.962901  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:00.963487  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:25:00.963557  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:00.963442  580698 retry.go:31] will retry after 784.374872ms: waiting for domain to come up
	I0120 12:25:01.749153  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:01.749729  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:25:01.749762  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:01.749683  580698 retry.go:31] will retry after 985.496204ms: waiting for domain to come up
	I0120 12:25:02.736982  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:02.737613  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:25:02.737645  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:02.737573  580698 retry.go:31] will retry after 1.287227851s: waiting for domain to come up
	I0120 12:25:04.027162  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:04.027595  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:25:04.027641  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:04.027590  580698 retry.go:31] will retry after 2.033306338s: waiting for domain to come up
	I0120 12:25:06.062268  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:06.062806  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:25:06.062834  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:06.062769  580698 retry.go:31] will retry after 2.791569905s: waiting for domain to come up
	I0120 12:25:08.855885  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:08.856539  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:25:08.856567  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:08.856507  580698 retry.go:31] will retry after 2.690350592s: waiting for domain to come up
	I0120 12:25:11.550477  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:11.551079  580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
	I0120 12:25:11.551109  580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:11.551004  580698 retry.go:31] will retry after 3.84625692s: waiting for domain to come up
	I0120 12:25:15.401681  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.402320  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has current primary IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.402366  580663 main.go:141] libmachine: (no-preload-677886) found domain IP: 192.168.72.157
	I0120 12:25:15.402381  580663 main.go:141] libmachine: (no-preload-677886) reserving static IP address...
	I0120 12:25:15.402786  580663 main.go:141] libmachine: (no-preload-677886) reserved static IP address 192.168.72.157 for domain no-preload-677886
	I0120 12:25:15.402814  580663 main.go:141] libmachine: (no-preload-677886) waiting for SSH...
	I0120 12:25:15.402835  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "no-preload-677886", mac: "52:54:00:3c:87:c0", ip: "192.168.72.157"} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:15.402860  580663 main.go:141] libmachine: (no-preload-677886) DBG | skip adding static IP to network mk-no-preload-677886 - found existing host DHCP lease matching {name: "no-preload-677886", mac: "52:54:00:3c:87:c0", ip: "192.168.72.157"}
	I0120 12:25:15.402873  580663 main.go:141] libmachine: (no-preload-677886) DBG | Getting to WaitForSSH function...
	I0120 12:25:15.405269  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.405604  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:15.405626  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.405761  580663 main.go:141] libmachine: (no-preload-677886) DBG | Using SSH client type: external
	I0120 12:25:15.405775  580663 main.go:141] libmachine: (no-preload-677886) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa (-rw-------)
	I0120 12:25:15.405832  580663 main.go:141] libmachine: (no-preload-677886) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:25:15.405857  580663 main.go:141] libmachine: (no-preload-677886) DBG | About to run SSH command:
	I0120 12:25:15.405877  580663 main.go:141] libmachine: (no-preload-677886) DBG | exit 0
	I0120 12:25:15.530526  580663 main.go:141] libmachine: (no-preload-677886) DBG | SSH cmd err, output: <nil>: 
	I0120 12:25:15.530944  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetConfigRaw
	I0120 12:25:15.531629  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetIP
	I0120 12:25:15.534406  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.534911  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:15.534958  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.535249  580663 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/config.json ...
	I0120 12:25:15.535471  580663 machine.go:93] provisionDockerMachine start ...
	I0120 12:25:15.535490  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:25:15.535721  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:25:15.538459  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.538821  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:15.538844  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.539004  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
	I0120 12:25:15.539194  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:25:15.539379  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:25:15.539551  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
	I0120 12:25:15.539760  580663 main.go:141] libmachine: Using SSH client type: native
	I0120 12:25:15.540012  580663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0120 12:25:15.540025  580663 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:25:15.646562  580663 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 12:25:15.646591  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetMachineName
	I0120 12:25:15.646876  580663 buildroot.go:166] provisioning hostname "no-preload-677886"
	I0120 12:25:15.646908  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetMachineName
	I0120 12:25:15.647130  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:25:15.650308  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.650669  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:15.650698  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.650879  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
	I0120 12:25:15.651128  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:25:15.651342  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:25:15.651556  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
	I0120 12:25:15.651786  580663 main.go:141] libmachine: Using SSH client type: native
	I0120 12:25:15.652025  580663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0120 12:25:15.652053  580663 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-677886 && echo "no-preload-677886" | sudo tee /etc/hostname
	I0120 12:25:15.768606  580663 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-677886
	
	I0120 12:25:15.768640  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:25:15.771694  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.772037  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:15.772087  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.772269  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
	I0120 12:25:15.772467  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:25:15.772674  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:25:15.772805  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
	I0120 12:25:15.772937  580663 main.go:141] libmachine: Using SSH client type: native
	I0120 12:25:15.773113  580663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0120 12:25:15.773128  580663 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-677886' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-677886/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-677886' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:25:15.879097  580663 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:25:15.879135  580663 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-530330/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-530330/.minikube}
	I0120 12:25:15.879160  580663 buildroot.go:174] setting up certificates
	I0120 12:25:15.879175  580663 provision.go:84] configureAuth start
	I0120 12:25:15.879203  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetMachineName
	I0120 12:25:15.879546  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetIP
	I0120 12:25:15.882077  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.882472  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:15.882503  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.882635  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:25:15.884841  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.885175  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:15.885215  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:15.885392  580663 provision.go:143] copyHostCerts
	I0120 12:25:15.885460  580663 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem, removing ...
	I0120 12:25:15.885483  580663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem
	I0120 12:25:15.885554  580663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem (1078 bytes)
	I0120 12:25:15.885685  580663 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem, removing ...
	I0120 12:25:15.885695  580663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem
	I0120 12:25:15.885727  580663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem (1123 bytes)
	I0120 12:25:15.885830  580663 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem, removing ...
	I0120 12:25:15.885840  580663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem
	I0120 12:25:15.885869  580663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem (1675 bytes)
	I0120 12:25:15.885949  580663 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem org=jenkins.no-preload-677886 san=[127.0.0.1 192.168.72.157 localhost minikube no-preload-677886]
	I0120 12:25:16.005597  580663 provision.go:177] copyRemoteCerts
	I0120 12:25:16.005691  580663 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:25:16.005730  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:25:16.008891  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:16.009345  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:16.009389  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:16.009623  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
	I0120 12:25:16.009837  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:25:16.010002  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
	I0120 12:25:16.010130  580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
	I0120 12:25:16.099759  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0120 12:25:16.130170  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 12:25:16.157604  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:25:16.187763  580663 provision.go:87] duration metric: took 308.558766ms to configureAuth
	I0120 12:25:16.187795  580663 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:25:16.188011  580663 config.go:182] Loaded profile config "no-preload-677886": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:25:16.188031  580663 machine.go:96] duration metric: took 652.54508ms to provisionDockerMachine
	I0120 12:25:16.188043  580663 start.go:293] postStartSetup for "no-preload-677886" (driver="kvm2")
	I0120 12:25:16.188057  580663 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:25:16.188094  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:25:16.188456  580663 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:25:16.188498  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:25:16.191394  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:16.191712  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:16.191751  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:16.191878  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
	I0120 12:25:16.192087  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:25:16.192265  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
	I0120 12:25:16.192419  580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
	I0120 12:25:16.277146  580663 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:25:16.282163  580663 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:25:16.282202  580663 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-530330/.minikube/addons for local assets ...
	I0120 12:25:16.282264  580663 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-530330/.minikube/files for local assets ...
	I0120 12:25:16.282348  580663 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem -> 5375812.pem in /etc/ssl/certs
	I0120 12:25:16.282491  580663 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:25:16.292957  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem --> /etc/ssl/certs/5375812.pem (1708 bytes)
	I0120 12:25:16.323353  580663 start.go:296] duration metric: took 135.288428ms for postStartSetup
	I0120 12:25:16.323414  580663 fix.go:56] duration metric: took 19.732994766s for fixHost
	I0120 12:25:16.323444  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:25:16.326291  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:16.326728  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:16.326762  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:16.326921  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
	I0120 12:25:16.327120  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:25:16.327275  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:25:16.327441  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
	I0120 12:25:16.327645  580663 main.go:141] libmachine: Using SSH client type: native
	I0120 12:25:16.327894  580663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0120 12:25:16.327909  580663 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:25:16.435263  580663 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737375916.389485996
	
	I0120 12:25:16.435316  580663 fix.go:216] guest clock: 1737375916.389485996
	I0120 12:25:16.435327  580663 fix.go:229] Guest: 2025-01-20 12:25:16.389485996 +0000 UTC Remote: 2025-01-20 12:25:16.323419583 +0000 UTC m=+19.915192404 (delta=66.066413ms)
	I0120 12:25:16.435358  580663 fix.go:200] guest clock delta is within tolerance: 66.066413ms
	I0120 12:25:16.435365  580663 start.go:83] releasing machines lock for "no-preload-677886", held for 19.844964569s
	I0120 12:25:16.435397  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:25:16.435687  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetIP
	I0120 12:25:16.438862  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:16.439261  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:16.439292  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:16.439707  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:25:16.440382  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:25:16.440600  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:25:16.440714  580663 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:25:16.440777  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:25:16.440934  580663 ssh_runner.go:195] Run: cat /version.json
	I0120 12:25:16.440970  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:25:16.444124  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:16.444356  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:16.444539  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:16.444579  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:16.444741  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:16.444760  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
	I0120 12:25:16.444767  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:16.444974  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
	I0120 12:25:16.445026  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:25:16.445153  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:25:16.445206  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
	I0120 12:25:16.445412  580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
	I0120 12:25:16.445429  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
	I0120 12:25:16.445622  580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
	I0120 12:25:16.523376  580663 ssh_runner.go:195] Run: systemctl --version
	I0120 12:25:16.551805  580663 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:25:16.560103  580663 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:25:16.560184  580663 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:25:16.585768  580663 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:25:16.585821  580663 start.go:495] detecting cgroup driver to use...
	I0120 12:25:16.585918  580663 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0120 12:25:16.619412  580663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0120 12:25:16.634018  580663 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:25:16.634091  580663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:25:16.650862  580663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:25:16.667222  580663 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:25:16.827621  580663 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:25:16.997836  580663 docker.go:233] disabling docker service ...
	I0120 12:25:16.997920  580663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:25:17.012952  580663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:25:17.033066  580663 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:25:17.184785  580663 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:25:17.308240  580663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:25:17.323018  580663 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:25:17.346117  580663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0120 12:25:17.362604  580663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0120 12:25:17.377268  580663 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0120 12:25:17.377358  580663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0120 12:25:17.389938  580663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 12:25:17.401504  580663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0120 12:25:17.412628  580663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 12:25:17.423600  580663 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:25:17.434784  580663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0120 12:25:17.446433  580663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0120 12:25:17.457770  580663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0120 12:25:17.470005  580663 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:25:17.480134  580663 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:25:17.480204  580663 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:25:17.495835  580663 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:25:17.506603  580663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:25:17.647336  580663 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0120 12:25:17.679291  580663 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0120 12:25:17.679405  580663 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 12:25:17.684614  580663 retry.go:31] will retry after 596.77903ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0120 12:25:18.282567  580663 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 12:25:18.288477  580663 start.go:563] Will wait 60s for crictl version
	I0120 12:25:18.288558  580663 ssh_runner.go:195] Run: which crictl
	I0120 12:25:18.293095  580663 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:25:18.339384  580663 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0120 12:25:18.339513  580663 ssh_runner.go:195] Run: containerd --version
	I0120 12:25:18.371062  580663 ssh_runner.go:195] Run: containerd --version
	I0120 12:25:18.401306  580663 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
	I0120 12:25:18.402946  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetIP
	I0120 12:25:18.406062  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:18.406509  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:25:18.406529  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:25:18.406815  580663 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 12:25:18.411947  580663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:25:18.425347  580663 kubeadm.go:883] updating cluster {Name:no-preload-677886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-677886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:25:18.425473  580663 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:25:18.425516  580663 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:25:18.462916  580663 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 12:25:18.462946  580663 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:25:18.462957  580663 kubeadm.go:934] updating node { 192.168.72.157 8443 v1.32.0 containerd true true} ...
	I0120 12:25:18.463086  580663 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-677886 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:no-preload-677886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:25:18.463159  580663 ssh_runner.go:195] Run: sudo crictl info
	I0120 12:25:18.499236  580663 cni.go:84] Creating CNI manager for ""
	I0120 12:25:18.499264  580663 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 12:25:18.499280  580663 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:25:18.499310  580663 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.157 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-677886 NodeName:no-preload-677886 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:25:18.499474  580663 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-677886"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.157"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.157"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:25:18.499563  580663 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:25:18.510525  580663 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:25:18.510642  580663 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:25:18.524295  580663 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0120 12:25:18.543425  580663 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:25:18.561360  580663 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2313 bytes)
	I0120 12:25:18.581273  580663 ssh_runner.go:195] Run: grep 192.168.72.157	control-plane.minikube.internal$ /etc/hosts
	I0120 12:25:18.593128  580663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.157	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:25:18.606167  580663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:25:18.729737  580663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:25:18.753136  580663 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886 for IP: 192.168.72.157
	I0120 12:25:18.753159  580663 certs.go:194] generating shared ca certs ...
	I0120 12:25:18.753178  580663 certs.go:226] acquiring lock for ca certs: {Name:mk52c62007c989bdf47cf8ee68bb49e4d4d8996b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:25:18.753337  580663 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.key
	I0120 12:25:18.753395  580663 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.key
	I0120 12:25:18.753409  580663 certs.go:256] generating profile certs ...
	I0120 12:25:18.753519  580663 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/client.key
	I0120 12:25:18.753605  580663 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/apiserver.key.8959decb
	I0120 12:25:18.753660  580663 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/proxy-client.key
	I0120 12:25:18.753790  580663 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581.pem (1338 bytes)
	W0120 12:25:18.753853  580663 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581_empty.pem, impossibly tiny 0 bytes
	I0120 12:25:18.753869  580663 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:25:18.753902  580663 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:25:18.753934  580663 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:25:18.753966  580663 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem (1675 bytes)
	I0120 12:25:18.754031  580663 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem (1708 bytes)
	I0120 12:25:18.755002  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:25:18.810090  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:25:18.853328  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:25:18.889238  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 12:25:18.925460  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0120 12:25:18.962448  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 12:25:18.999369  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:25:19.032057  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:25:19.061632  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:25:19.091446  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581.pem --> /usr/share/ca-certificates/537581.pem (1338 bytes)
	I0120 12:25:19.118422  580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem --> /usr/share/ca-certificates/5375812.pem (1708 bytes)
	I0120 12:25:19.143431  580663 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:25:19.162253  580663 ssh_runner.go:195] Run: openssl version
	I0120 12:25:19.168374  580663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537581.pem && ln -fs /usr/share/ca-certificates/537581.pem /etc/ssl/certs/537581.pem"
	I0120 12:25:19.180856  580663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537581.pem
	I0120 12:25:19.185868  580663 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:24 /usr/share/ca-certificates/537581.pem
	I0120 12:25:19.185929  580663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537581.pem
	I0120 12:25:19.192441  580663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537581.pem /etc/ssl/certs/51391683.0"
	I0120 12:25:19.205064  580663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5375812.pem && ln -fs /usr/share/ca-certificates/5375812.pem /etc/ssl/certs/5375812.pem"
	I0120 12:25:19.221620  580663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5375812.pem
	I0120 12:25:19.227409  580663 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:24 /usr/share/ca-certificates/5375812.pem
	I0120 12:25:19.227483  580663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5375812.pem
	I0120 12:25:19.235639  580663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5375812.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:25:19.247521  580663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:25:19.259669  580663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:25:19.265367  580663 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:16 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:25:19.265458  580663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:25:19.272666  580663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:25:19.286126  580663 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:25:19.291058  580663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:25:19.297354  580663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:25:19.303419  580663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:25:19.310027  580663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:25:19.317795  580663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:25:19.325533  580663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 12:25:19.331891  580663 kubeadm.go:392] StartCluster: {Name:no-preload-677886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-677886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:25:19.332000  580663 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0120 12:25:19.332050  580663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:25:19.374646  580663 cri.go:89] found id: "bfa57f7d3617765871ed0a201c2d868cd3eb881ec9ee84e3798741d3978fa323"
	I0120 12:25:19.374676  580663 cri.go:89] found id: "8889bf48964a56ae72872b426a20a0932902b06c15b99a3de1e3c8ba1b04bfc7"
	I0120 12:25:19.374679  580663 cri.go:89] found id: "eea6f34f583b30cc484060412a97a0bc3c95afa3df578ef1e07bd6b681bc54a9"
	I0120 12:25:19.374682  580663 cri.go:89] found id: "222c27052e1031adbafb6744b25584f19a0a8dc63a205d42dc5f229d1ff60e47"
	I0120 12:25:19.374685  580663 cri.go:89] found id: "b4c070e430ab49b25459e2a02d6216019116c10d64063980782cb619692fd16b"
	I0120 12:25:19.374688  580663 cri.go:89] found id: "7509889b1127538ae07b5ab56638d11a21a970fc8972875f72e92215b2ace3c1"
	I0120 12:25:19.374691  580663 cri.go:89] found id: "6c9ede0755ce5eb4cd9c9c3289194d11bbfeb9a9706f0fb91fd6c48f8b86d94a"
	I0120 12:25:19.374694  580663 cri.go:89] found id: "5db0fac76c1d13f3d7b5654b5e72844e7f50678029ad9fd190b65870619d03fd"
	I0120 12:25:19.374696  580663 cri.go:89] found id: ""
	I0120 12:25:19.374743  580663 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0120 12:25:19.391060  580663 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-20T12:25:19Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0120 12:25:19.391199  580663 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:25:19.402737  580663 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 12:25:19.402763  580663 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 12:25:19.402827  580663 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 12:25:19.414851  580663 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:25:19.416024  580663 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-677886" does not appear in /home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 12:25:19.416621  580663 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-530330/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-677886" cluster setting kubeconfig missing "no-preload-677886" context setting]
	I0120 12:25:19.417328  580663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/kubeconfig: {Name:mk113e13541afa8413ea8a359169b0824f5f9ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:25:19.419325  580663 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 12:25:19.430541  580663 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.157
	I0120 12:25:19.430579  580663 kubeadm.go:1160] stopping kube-system containers ...
	I0120 12:25:19.430599  580663 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0120 12:25:19.430659  580663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:25:19.487663  580663 cri.go:89] found id: "bfa57f7d3617765871ed0a201c2d868cd3eb881ec9ee84e3798741d3978fa323"
	I0120 12:25:19.487695  580663 cri.go:89] found id: "8889bf48964a56ae72872b426a20a0932902b06c15b99a3de1e3c8ba1b04bfc7"
	I0120 12:25:19.487702  580663 cri.go:89] found id: "eea6f34f583b30cc484060412a97a0bc3c95afa3df578ef1e07bd6b681bc54a9"
	I0120 12:25:19.487707  580663 cri.go:89] found id: "222c27052e1031adbafb6744b25584f19a0a8dc63a205d42dc5f229d1ff60e47"
	I0120 12:25:19.487712  580663 cri.go:89] found id: "b4c070e430ab49b25459e2a02d6216019116c10d64063980782cb619692fd16b"
	I0120 12:25:19.487717  580663 cri.go:89] found id: "7509889b1127538ae07b5ab56638d11a21a970fc8972875f72e92215b2ace3c1"
	I0120 12:25:19.487721  580663 cri.go:89] found id: "6c9ede0755ce5eb4cd9c9c3289194d11bbfeb9a9706f0fb91fd6c48f8b86d94a"
	I0120 12:25:19.487725  580663 cri.go:89] found id: "5db0fac76c1d13f3d7b5654b5e72844e7f50678029ad9fd190b65870619d03fd"
	I0120 12:25:19.487729  580663 cri.go:89] found id: ""
	I0120 12:25:19.487736  580663 cri.go:252] Stopping containers: [bfa57f7d3617765871ed0a201c2d868cd3eb881ec9ee84e3798741d3978fa323 8889bf48964a56ae72872b426a20a0932902b06c15b99a3de1e3c8ba1b04bfc7 eea6f34f583b30cc484060412a97a0bc3c95afa3df578ef1e07bd6b681bc54a9 222c27052e1031adbafb6744b25584f19a0a8dc63a205d42dc5f229d1ff60e47 b4c070e430ab49b25459e2a02d6216019116c10d64063980782cb619692fd16b 7509889b1127538ae07b5ab56638d11a21a970fc8972875f72e92215b2ace3c1 6c9ede0755ce5eb4cd9c9c3289194d11bbfeb9a9706f0fb91fd6c48f8b86d94a 5db0fac76c1d13f3d7b5654b5e72844e7f50678029ad9fd190b65870619d03fd]
	I0120 12:25:19.487797  580663 ssh_runner.go:195] Run: which crictl
	I0120 12:25:19.492093  580663 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 bfa57f7d3617765871ed0a201c2d868cd3eb881ec9ee84e3798741d3978fa323 8889bf48964a56ae72872b426a20a0932902b06c15b99a3de1e3c8ba1b04bfc7 eea6f34f583b30cc484060412a97a0bc3c95afa3df578ef1e07bd6b681bc54a9 222c27052e1031adbafb6744b25584f19a0a8dc63a205d42dc5f229d1ff60e47 b4c070e430ab49b25459e2a02d6216019116c10d64063980782cb619692fd16b 7509889b1127538ae07b5ab56638d11a21a970fc8972875f72e92215b2ace3c1 6c9ede0755ce5eb4cd9c9c3289194d11bbfeb9a9706f0fb91fd6c48f8b86d94a 5db0fac76c1d13f3d7b5654b5e72844e7f50678029ad9fd190b65870619d03fd
	I0120 12:25:19.531809  580663 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 12:25:19.549013  580663 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:25:19.563634  580663 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:25:19.563664  580663 kubeadm.go:157] found existing configuration files:
	
	I0120 12:25:19.563724  580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:25:19.576840  580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:25:19.576904  580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:25:19.591965  580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:25:19.602797  580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:25:19.602868  580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:25:19.616597  580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:25:19.629930  580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:25:19.630018  580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:25:19.643805  580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:25:19.656962  580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:25:19.657040  580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:25:19.671375  580663 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:25:19.685780  580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:25:19.836161  580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:25:20.692199  580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:25:20.897505  580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:25:20.970999  580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:25:21.088635  580663 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:25:21.088732  580663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:25:21.589031  580663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:25:22.088913  580663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:25:22.121572  580663 api_server.go:72] duration metric: took 1.032934898s to wait for apiserver process to appear ...
	I0120 12:25:22.121609  580663 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:25:22.121635  580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0120 12:25:22.122270  580663 api_server.go:269] stopped: https://192.168.72.157:8443/healthz: Get "https://192.168.72.157:8443/healthz": dial tcp 192.168.72.157:8443: connect: connection refused
	I0120 12:25:22.621890  580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0120 12:25:25.087924  580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:25:25.087959  580663 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:25:25.087981  580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0120 12:25:25.116120  580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:25:25.116148  580663 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:25:25.122385  580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0120 12:25:25.193884  580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:25:25.193938  580663 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:25:25.622588  580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0120 12:25:25.627048  580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:25:25.627072  580663 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:25:26.121711  580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0120 12:25:26.131103  580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:25:26.131131  580663 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:25:26.621857  580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0120 12:25:26.630738  580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
	ok
	I0120 12:25:26.641660  580663 api_server.go:141] control plane version: v1.32.0
	I0120 12:25:26.641688  580663 api_server.go:131] duration metric: took 4.520071397s to wait for apiserver health ...
	I0120 12:25:26.641697  580663 cni.go:84] Creating CNI manager for ""
	I0120 12:25:26.641703  580663 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 12:25:26.643494  580663 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:25:26.645193  580663 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:25:26.665039  580663 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:25:26.693649  580663 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:25:26.703765  580663 system_pods.go:59] 8 kube-system pods found
	I0120 12:25:26.703803  580663 system_pods.go:61] "coredns-668d6bf9bc-zb8zw" [76792e2d-784e-40bd-8f41-dff4f5d2a000] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 12:25:26.703815  580663 system_pods.go:61] "etcd-no-preload-677886" [19c08e3a-a730-4dc7-a415-241f04c62e96] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 12:25:26.703833  580663 system_pods.go:61] "kube-apiserver-no-preload-677886" [dba4da15-817f-4cd9-9cf6-3b86c494c7d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 12:25:26.703851  580663 system_pods.go:61] "kube-controller-manager-no-preload-677886" [3010b348-847c-4c27-b60d-d69f8a145886] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 12:25:26.703860  580663 system_pods.go:61] "kube-proxy-9xrpd" [70e7b10c-60c6-4667-8ba2-76f7cd4857ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 12:25:26.703873  580663 system_pods.go:61] "kube-scheduler-no-preload-677886" [3788a16c-16fb-413a-a6e2-2e9a4e4d86ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 12:25:26.703884  580663 system_pods.go:61] "metrics-server-f79f97bbb-6hgwn" [96b61173-8260-4d4c-b87a-1fbeacc5e0e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:25:26.703894  580663 system_pods.go:61] "storage-provisioner" [f9580e57-1600-4be5-a8a6-c56d510ced4c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 12:25:26.703903  580663 system_pods.go:74] duration metric: took 10.231015ms to wait for pod list to return data ...
	I0120 12:25:26.703913  580663 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:25:26.709233  580663 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:25:26.709263  580663 node_conditions.go:123] node cpu capacity is 2
	I0120 12:25:26.709276  580663 node_conditions.go:105] duration metric: took 5.355597ms to run NodePressure ...
	I0120 12:25:26.709295  580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:25:27.005262  580663 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 12:25:27.009537  580663 kubeadm.go:739] kubelet initialised
	I0120 12:25:27.009557  580663 kubeadm.go:740] duration metric: took 4.264831ms waiting for restarted kubelet to initialise ...
	I0120 12:25:27.009565  580663 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:25:27.013597  580663 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-zb8zw" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:29.020427  580663 pod_ready.go:103] pod "coredns-668d6bf9bc-zb8zw" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:31.021432  580663 pod_ready.go:103] pod "coredns-668d6bf9bc-zb8zw" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:32.020160  580663 pod_ready.go:93] pod "coredns-668d6bf9bc-zb8zw" in "kube-system" namespace has status "Ready":"True"
	I0120 12:25:32.020186  580663 pod_ready.go:82] duration metric: took 5.00656531s for pod "coredns-668d6bf9bc-zb8zw" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:32.020197  580663 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:34.026830  580663 pod_ready.go:103] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:36.027008  580663 pod_ready.go:103] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:38.027568  580663 pod_ready.go:103] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:40.529616  580663 pod_ready.go:103] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:41.527275  580663 pod_ready.go:93] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
	I0120 12:25:41.527298  580663 pod_ready.go:82] duration metric: took 9.507094464s for pod "etcd-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:41.527308  580663 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:41.532202  580663 pod_ready.go:93] pod "kube-apiserver-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
	I0120 12:25:41.532228  580663 pod_ready.go:82] duration metric: took 4.913239ms for pod "kube-apiserver-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:41.532238  580663 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:41.536384  580663 pod_ready.go:93] pod "kube-controller-manager-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
	I0120 12:25:41.536403  580663 pod_ready.go:82] duration metric: took 4.158471ms for pod "kube-controller-manager-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:41.536411  580663 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9xrpd" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:41.540413  580663 pod_ready.go:93] pod "kube-proxy-9xrpd" in "kube-system" namespace has status "Ready":"True"
	I0120 12:25:41.540430  580663 pod_ready.go:82] duration metric: took 4.014364ms for pod "kube-proxy-9xrpd" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:41.540438  580663 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:41.544348  580663 pod_ready.go:93] pod "kube-scheduler-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
	I0120 12:25:41.544368  580663 pod_ready.go:82] duration metric: took 3.923918ms for pod "kube-scheduler-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:41.544377  580663 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace to be "Ready" ...
	I0120 12:25:43.551462  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:46.052740  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:48.053396  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:50.551084  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:52.553112  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:55.051232  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:57.051510  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:25:59.055844  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:01.553091  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:04.051451  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:06.051745  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:08.051926  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:10.058147  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:12.552173  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:14.553469  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:17.051972  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:19.052257  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:21.551553  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:23.552130  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:26.051383  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:28.549742  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:30.551885  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:32.556125  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:35.054623  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:37.551532  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:39.551592  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:41.553899  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:44.050895  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:46.552836  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:48.553470  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:50.554840  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:53.053470  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:55.552983  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:58.054576  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:00.552438  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:02.554035  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:05.051995  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:07.053250  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:09.551608  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:12.052171  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:14.551916  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:16.553013  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:19.052605  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:21.553751  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:24.054433  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:26.551663  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:29.052843  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:31.053282  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:33.550594  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:35.551150  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:37.551800  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:40.050932  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:42.550828  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:44.551516  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:46.552551  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:49.051597  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:51.550614  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:53.550923  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:56.050037  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:58.051436  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:00.051609  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:02.551345  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:04.551710  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:07.051565  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:09.551406  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:12.051287  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:14.051571  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:16.550571  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:18.551384  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:21.052345  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:23.052988  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:25.552160  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:27.553119  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:30.052514  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:32.052597  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:34.550382  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:36.554593  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:39.052292  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:41.551156  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:43.552839  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:46.051011  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:48.051793  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:50.051883  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:52.052625  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:54.552862  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:56.596014  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:59.052473  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:01.053068  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:03.053535  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:05.551774  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:08.051998  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:10.052549  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:12.551545  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:15.052148  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:17.551185  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:19.552734  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:22.051159  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:24.053498  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:26.552235  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:29.051004  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:31.051485  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:33.551037  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:35.551680  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:38.051626  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:40.051943  580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:41.544634  580663 pod_ready.go:82] duration metric: took 4m0.00023314s for pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace to be "Ready" ...
	E0120 12:29:41.544663  580663 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:29:41.544691  580663 pod_ready.go:39] duration metric: took 4m14.535115442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
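
The run of pod_ready lines above is minikube polling the metrics-server pod's Ready condition roughly every 2.5 seconds until its 4m0s budget expires. A minimal hand-run equivalent of that check, with the pod name and namespace taken from this log (the loop itself is only an illustration, not minikube's code):

  deadline=$((SECONDS + 240))                      # 4m0s budget, as in the log
  while (( SECONDS < deadline )); do
    status=$(kubectl -n kube-system get pod metrics-server-f79f97bbb-6hgwn \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)
    [[ "$status" == "True" ]] && { echo "Ready"; break; }
    sleep 2.5
  done
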
	I0120 12:29:41.544734  580663 kubeadm.go:597] duration metric: took 4m22.141964379s to restartPrimaryControlPlane
	W0120 12:29:41.544823  580663 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:29:41.544859  580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0120 12:29:43.325105  580663 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.780216325s)
	I0120 12:29:43.325179  580663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:29:43.340601  580663 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:29:43.352006  580663 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:29:43.363189  580663 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:29:43.363210  580663 kubeadm.go:157] found existing configuration files:
	
	I0120 12:29:43.363265  580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:29:43.375237  580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:29:43.375301  580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:29:43.391031  580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:29:43.401786  580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:29:43.401871  580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:29:43.413048  580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:29:43.423854  580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:29:43.423932  580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:29:43.434619  580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:29:43.444908  580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:29:43.444978  580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
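
The four grep-then-rm exchanges above are minikube's stale-config check: each kubeconfig-style file under /etc/kubernetes survives only if it still points at the expected API endpoint. The same logic, condensed into one loop purely for illustration:

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done
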
	I0120 12:29:43.455919  580663 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:29:43.503019  580663 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:29:43.503090  580663 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:29:43.620840  580663 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:29:43.621013  580663 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:29:43.621138  580663 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:29:43.628035  580663 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:29:43.630145  580663 out.go:235]   - Generating certificates and keys ...
	I0120 12:29:43.630283  580663 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:29:43.630755  580663 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:29:43.630887  580663 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:29:43.631240  580663 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:29:43.631487  580663 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:29:43.631849  580663 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:29:43.632017  580663 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:29:43.632153  580663 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:29:43.632634  580663 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:29:43.632734  580663 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:29:43.632900  580663 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:29:43.632993  580663 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:29:43.958312  580663 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:29:44.044087  580663 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:29:44.320019  580663 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:29:44.451393  580663 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:29:44.716527  580663 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:29:44.717392  580663 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:29:44.721542  580663 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:29:44.723747  580663 out.go:235]   - Booting up control plane ...
	I0120 12:29:44.723867  580663 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:29:44.724452  580663 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:29:44.727368  580663 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:29:44.749031  580663 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:29:44.757092  580663 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:29:44.757174  580663 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:29:44.921783  580663 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:29:44.921993  580663 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:29:45.922247  580663 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001241539s
	I0120 12:29:45.922381  580663 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:29:50.926135  580663 kubeadm.go:310] [api-check] The API server is healthy after 5.002210497s
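
The kubelet-check and api-check phases above each poll a health endpoint, and both can be queried by hand if a start ever stalls at this point. The kubelet URL is quoted in the log; the API server address below is this profile's control-plane IP and port, which appear further down in the same log, and unauthenticated access to its health endpoint assumes the default system:public-info-viewer binding is in place:

  curl -s  http://127.0.0.1:10248/healthz        # kubelet health (run from inside the VM)
  curl -sk https://192.168.72.157:8443/healthz   # API server health for this profile
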
	I0120 12:29:50.937294  580663 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:29:50.956725  580663 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:29:51.003153  580663 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:29:51.003451  580663 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-677886 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:29:51.022662  580663 kubeadm.go:310] [bootstrap-token] Using token: yujpfs.k6ck90dtmo1yxa66
	I0120 12:29:51.024781  580663 out.go:235]   - Configuring RBAC rules ...
	I0120 12:29:51.024951  580663 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:29:51.037177  580663 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:29:51.051029  580663 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:29:51.060737  580663 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:29:51.066857  580663 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:29:51.073422  580663 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:29:51.331992  580663 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:29:51.780375  580663 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:29:52.331230  580663 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:29:52.333488  580663 kubeadm.go:310] 
	I0120 12:29:52.333590  580663 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:29:52.333620  580663 kubeadm.go:310] 
	I0120 12:29:52.333712  580663 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:29:52.333718  580663 kubeadm.go:310] 
	I0120 12:29:52.333740  580663 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:29:52.333797  580663 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:29:52.333881  580663 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:29:52.333892  580663 kubeadm.go:310] 
	I0120 12:29:52.333985  580663 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:29:52.334006  580663 kubeadm.go:310] 
	I0120 12:29:52.334077  580663 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:29:52.334089  580663 kubeadm.go:310] 
	I0120 12:29:52.334158  580663 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:29:52.334276  580663 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:29:52.334381  580663 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:29:52.334403  580663 kubeadm.go:310] 
	I0120 12:29:52.334505  580663 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:29:52.334611  580663 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:29:52.334628  580663 kubeadm.go:310] 
	I0120 12:29:52.334741  580663 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yujpfs.k6ck90dtmo1yxa66 \
	I0120 12:29:52.334875  580663 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:326640d5f51fa6eddf3fd6f2b38f5a08d4608620129e8898d45359839be856c3 \
	I0120 12:29:52.334907  580663 kubeadm.go:310] 	--control-plane 
	I0120 12:29:52.334917  580663 kubeadm.go:310] 
	I0120 12:29:52.335036  580663 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:29:52.335047  580663 kubeadm.go:310] 
	I0120 12:29:52.335155  580663 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yujpfs.k6ck90dtmo1yxa66 \
	I0120 12:29:52.335306  580663 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:326640d5f51fa6eddf3fd6f2b38f5a08d4608620129e8898d45359839be856c3 
	I0120 12:29:52.336641  580663 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
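
The join commands printed above embed a discovery token CA certificate hash. Should it ever need to be recomputed for this cluster, the standard kubeadm recipe, pointed at the certificate directory named earlier in this log (/var/lib/minikube/certs), is:

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
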
	I0120 12:29:52.336671  580663 cni.go:84] Creating CNI manager for ""
	I0120 12:29:52.336684  580663 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 12:29:52.337989  580663 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:29:52.339338  580663 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:29:52.359963  580663 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
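
The 496-byte conflist copied above is not reproduced in the log. For orientation only, a bridge CNI configuration of the kind written to /etc/cni/net.d/1-k8s.conflist typically looks roughly like the sketch below; every field value here is an assumption, not the actual payload:

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
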
	I0120 12:29:52.385108  580663 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:29:52.385173  580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:29:52.385187  580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-677886 minikube.k8s.io/updated_at=2025_01_20T12_29_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=no-preload-677886 minikube.k8s.io/primary=true
	I0120 12:29:52.700612  580663 ops.go:34] apiserver oom_adj: -16
	I0120 12:29:52.700716  580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:29:53.201614  580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:29:53.700980  580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:29:54.200936  580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:29:54.700963  580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:29:55.200993  580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:29:55.701788  580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:29:55.818494  580663 kubeadm.go:1113] duration metric: took 3.433386907s to wait for elevateKubeSystemPrivileges
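
The repeated `kubectl get sa default` runs above are minikube waiting for the new cluster's default service account to exist (the elevateKubeSystemPrivileges step timed at ~3.4s). Written out by hand, with the retry cadence assumed from the log's timestamps:

  until sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done
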
	I0120 12:29:55.818535  580663 kubeadm.go:394] duration metric: took 4m36.486654712s to StartCluster
	I0120 12:29:55.818555  580663 settings.go:142] acquiring lock: {Name:mkbafde306c71e7b8958e2377ddfa5a9e3a59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:29:55.818636  580663 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 12:29:55.820492  580663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/kubeconfig: {Name:mk113e13541afa8413ea8a359169b0824f5f9ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:29:55.827906  580663 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 12:29:55.828002  580663 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:29:55.828108  580663 addons.go:69] Setting storage-provisioner=true in profile "no-preload-677886"
	I0120 12:29:55.828131  580663 addons.go:238] Setting addon storage-provisioner=true in "no-preload-677886"
	W0120 12:29:55.828140  580663 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:29:55.828129  580663 addons.go:69] Setting default-storageclass=true in profile "no-preload-677886"
	I0120 12:29:55.828162  580663 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-677886"
	I0120 12:29:55.828176  580663 host.go:66] Checking if "no-preload-677886" exists ...
	I0120 12:29:55.828226  580663 config.go:182] Loaded profile config "no-preload-677886": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:29:55.828302  580663 addons.go:69] Setting dashboard=true in profile "no-preload-677886"
	I0120 12:29:55.828321  580663 addons.go:238] Setting addon dashboard=true in "no-preload-677886"
	W0120 12:29:55.828332  580663 addons.go:247] addon dashboard should already be in state true
	I0120 12:29:55.828362  580663 host.go:66] Checking if "no-preload-677886" exists ...
	I0120 12:29:55.828680  580663 addons.go:69] Setting metrics-server=true in profile "no-preload-677886"
	I0120 12:29:55.828718  580663 addons.go:238] Setting addon metrics-server=true in "no-preload-677886"
	W0120 12:29:55.828727  580663 addons.go:247] addon metrics-server should already be in state true
	I0120 12:29:55.828729  580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:29:55.828758  580663 host.go:66] Checking if "no-preload-677886" exists ...
	I0120 12:29:55.828773  580663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:29:55.828790  580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:29:55.828838  580663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:29:55.829142  580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:29:55.829171  580663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:29:55.829387  580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:29:55.829436  580663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:29:55.829964  580663 out.go:177] * Verifying Kubernetes components...
	I0120 12:29:55.831634  580663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:29:55.847394  580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I0120 12:29:55.847867  580663 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:29:55.848446  580663 main.go:141] libmachine: Using API Version  1
	I0120 12:29:55.848460  580663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:29:55.848917  580663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:29:55.849092  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetState
	I0120 12:29:55.849662  580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
	I0120 12:29:55.850208  580663 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:29:55.850763  580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I0120 12:29:55.850852  580663 main.go:141] libmachine: Using API Version  1
	I0120 12:29:55.850870  580663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:29:55.851450  580663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:29:55.851563  580663 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:29:55.852783  580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38073
	I0120 12:29:55.852911  580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:29:55.852937  580663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:29:55.853255  580663 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:29:55.853349  580663 addons.go:238] Setting addon default-storageclass=true in "no-preload-677886"
	W0120 12:29:55.853358  580663 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:29:55.853380  580663 host.go:66] Checking if "no-preload-677886" exists ...
	I0120 12:29:55.853603  580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:29:55.853624  580663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:29:55.854076  580663 main.go:141] libmachine: Using API Version  1
	I0120 12:29:55.854097  580663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:29:55.854357  580663 main.go:141] libmachine: Using API Version  1
	I0120 12:29:55.854370  580663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:29:55.854666  580663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:29:55.854733  580663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:29:55.855063  580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:29:55.855086  580663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:29:55.855572  580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:29:55.855613  580663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:29:55.871877  580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
	I0120 12:29:55.872268  580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42391
	I0120 12:29:55.872468  580663 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:29:55.872568  580663 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:29:55.873006  580663 main.go:141] libmachine: Using API Version  1
	I0120 12:29:55.873030  580663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:29:55.873167  580663 main.go:141] libmachine: Using API Version  1
	I0120 12:29:55.873181  580663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:29:55.873318  580663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:29:55.873451  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetState
	I0120 12:29:55.873499  580663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:29:55.874038  580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:29:55.874080  580663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:29:55.875018  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:29:55.877132  580663 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:29:55.877504  580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0120 12:29:55.877895  580663 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:29:55.878401  580663 main.go:141] libmachine: Using API Version  1
	I0120 12:29:55.878420  580663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:29:55.878706  580663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:29:55.878882  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetState
	I0120 12:29:55.879913  580663 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:29:55.880337  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:29:55.881391  580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:29:55.881407  580663 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:29:55.881438  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:29:55.882243  580663 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:29:55.883861  580663 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:29:55.883881  580663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:29:55.883898  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:29:55.885880  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:29:55.886344  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:29:55.886373  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:29:55.887207  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:29:55.887242  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
	I0120 12:29:55.887401  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:29:55.887748  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
	I0120 12:29:55.887820  580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0120 12:29:55.887996  580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
	I0120 12:29:55.888347  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:29:55.888359  580663 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:29:55.888385  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:29:55.888584  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
	I0120 12:29:55.888739  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:29:55.888859  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
	I0120 12:29:55.888974  580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
	I0120 12:29:55.889346  580663 main.go:141] libmachine: Using API Version  1
	I0120 12:29:55.889369  580663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:29:55.889703  580663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:29:55.890041  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetState
	I0120 12:29:55.891415  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:29:55.893032  580663 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:29:55.894459  580663 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:29:55.894480  580663 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:29:55.894500  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:29:55.897523  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:29:55.897980  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:29:55.897996  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:29:55.898142  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
	I0120 12:29:55.898751  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:29:55.898981  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
	I0120 12:29:55.899163  580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
	I0120 12:29:55.906419  580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37287
	I0120 12:29:55.906839  580663 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:29:55.907284  580663 main.go:141] libmachine: Using API Version  1
	I0120 12:29:55.907303  580663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:29:55.907783  580663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:29:55.907939  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetState
	I0120 12:29:55.909544  580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
	I0120 12:29:55.909819  580663 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:29:55.909838  580663 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:29:55.909858  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
	I0120 12:29:55.912395  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:29:55.912786  580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
	I0120 12:29:55.912812  580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
	I0120 12:29:55.912976  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
	I0120 12:29:55.913163  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
	I0120 12:29:55.913339  580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
	I0120 12:29:55.913459  580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
	I0120 12:29:56.070157  580663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:29:56.091475  580663 node_ready.go:35] waiting up to 6m0s for node "no-preload-677886" to be "Ready" ...
	I0120 12:29:56.116298  580663 node_ready.go:49] node "no-preload-677886" has status "Ready":"True"
	I0120 12:29:56.116329  580663 node_ready.go:38] duration metric: took 24.817971ms for node "no-preload-677886" to be "Ready" ...
	I0120 12:29:56.116344  580663 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:29:56.122752  580663 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:56.163838  580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:29:56.163872  580663 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:29:56.176791  580663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:29:56.192766  580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:29:56.192793  580663 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:29:56.247589  580663 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:29:56.247617  580663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:29:56.259937  580663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:29:56.262988  580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:29:56.263013  580663 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:29:56.291947  580663 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:29:56.291975  580663 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:29:56.334662  580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:29:56.334684  580663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:29:56.346674  580663 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:29:56.346705  580663 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:29:56.406320  580663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:29:56.435903  580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:29:56.435941  580663 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:29:56.520423  580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:29:56.520450  580663 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:29:56.549376  580663 main.go:141] libmachine: Making call to close driver server
	I0120 12:29:56.549414  580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
	I0120 12:29:56.549765  580663 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:29:56.549785  580663 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:29:56.549795  580663 main.go:141] libmachine: Making call to close driver server
	I0120 12:29:56.549817  580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
	I0120 12:29:56.549838  580663 main.go:141] libmachine: (no-preload-677886) DBG | Closing plugin on server side
	I0120 12:29:56.550308  580663 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:29:56.550325  580663 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:29:56.565213  580663 main.go:141] libmachine: Making call to close driver server
	I0120 12:29:56.565245  580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
	I0120 12:29:56.565606  580663 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:29:56.565629  580663 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:29:56.619007  580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:29:56.619039  580663 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:29:56.732894  580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:29:56.732942  580663 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:29:56.864261  580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:29:56.864282  580663 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:29:56.893833  580663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:29:57.402805  580663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.142824049s)
	I0120 12:29:57.402860  580663 main.go:141] libmachine: Making call to close driver server
	I0120 12:29:57.402872  580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
	I0120 12:29:57.403187  580663 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:29:57.403224  580663 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:29:57.403228  580663 main.go:141] libmachine: (no-preload-677886) DBG | Closing plugin on server side
	I0120 12:29:57.403240  580663 main.go:141] libmachine: Making call to close driver server
	I0120 12:29:57.403251  580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
	I0120 12:29:57.403645  580663 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:29:57.403661  580663 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:29:58.156343  580663 pod_ready.go:103] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:58.212022  580663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.805656545s)
	I0120 12:29:58.212073  580663 main.go:141] libmachine: Making call to close driver server
	I0120 12:29:58.212089  580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
	I0120 12:29:58.212421  580663 main.go:141] libmachine: (no-preload-677886) DBG | Closing plugin on server side
	I0120 12:29:58.212472  580663 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:29:58.212484  580663 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:29:58.212492  580663 main.go:141] libmachine: Making call to close driver server
	I0120 12:29:58.212502  580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
	I0120 12:29:58.212754  580663 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:29:58.212776  580663 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:29:58.212787  580663 addons.go:479] Verifying addon metrics-server=true in "no-preload-677886"
	I0120 12:29:59.132234  580663 pod_ready.go:93] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:59.132257  580663 pod_ready.go:82] duration metric: took 3.009475203s for pod "etcd-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:59.132266  580663 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:59.535990  580663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.642103584s)
	I0120 12:29:59.536050  580663 main.go:141] libmachine: Making call to close driver server
	I0120 12:29:59.536065  580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
	I0120 12:29:59.537910  580663 main.go:141] libmachine: (no-preload-677886) DBG | Closing plugin on server side
	I0120 12:29:59.537945  580663 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:29:59.537960  580663 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:29:59.537969  580663 main.go:141] libmachine: Making call to close driver server
	I0120 12:29:59.537974  580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
	I0120 12:29:59.540301  580663 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:29:59.540320  580663 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:29:59.542169  580663 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-677886 addons enable metrics-server
	
	I0120 12:29:59.543685  580663 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 12:29:59.544960  580663 addons.go:514] duration metric: took 3.716966822s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 12:30:01.140019  580663 pod_ready.go:103] pod "kube-apiserver-no-preload-677886" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:01.640096  580663 pod_ready.go:93] pod "kube-apiserver-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:01.640124  580663 pod_ready.go:82] duration metric: took 2.507849401s for pod "kube-apiserver-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:01.640139  580663 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:02.647785  580663 pod_ready.go:93] pod "kube-controller-manager-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:02.647813  580663 pod_ready.go:82] duration metric: took 1.007665809s for pod "kube-controller-manager-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:02.647829  580663 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:02.652782  580663 pod_ready.go:93] pod "kube-scheduler-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:02.652809  580663 pod_ready.go:82] duration metric: took 4.97098ms for pod "kube-scheduler-no-preload-677886" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:02.652821  580663 pod_ready.go:39] duration metric: took 6.536455725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:30:02.652839  580663 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:30:02.652893  580663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:30:02.669503  580663 api_server.go:72] duration metric: took 6.84155672s to wait for apiserver process to appear ...
	I0120 12:30:02.669532  580663 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:30:02.669555  580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0120 12:30:02.674523  580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
	ok
	I0120 12:30:02.675672  580663 api_server.go:141] control plane version: v1.32.0
	I0120 12:30:02.675695  580663 api_server.go:131] duration metric: took 6.15459ms to wait for apiserver health ...
	I0120 12:30:02.675705  580663 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:30:02.680997  580663 system_pods.go:59] 9 kube-system pods found
	I0120 12:30:02.681020  580663 system_pods.go:61] "coredns-668d6bf9bc-9xmv8" [341d3c31-11b2-4764-98bf-e97ec1a50fd2] Running
	I0120 12:30:02.681025  580663 system_pods.go:61] "coredns-668d6bf9bc-wsnqr" [be77eebd-ba8c-42a5-acf0-dbe37c295e78] Running
	I0120 12:30:02.681028  580663 system_pods.go:61] "etcd-no-preload-677886" [6df18fe2-2b6d-4ffb-8f91-ce21e0adc82c] Running
	I0120 12:30:02.681032  580663 system_pods.go:61] "kube-apiserver-no-preload-677886" [db6208f0-66c4-46d0-9ee8-5dfe2a6ba67e] Running
	I0120 12:30:02.681036  580663 system_pods.go:61] "kube-controller-manager-no-preload-677886" [bc9fd099-51fd-4d05-b8b2-496516d0afdd] Running
	I0120 12:30:02.681039  580663 system_pods.go:61] "kube-proxy-7mw9s" [c53d64fd-036a-45a3-bef6-852216c16650] Running
	I0120 12:30:02.681042  580663 system_pods.go:61] "kube-scheduler-no-preload-677886" [9ff2c632-77fa-4591-9d06-597df8321a9b] Running
	I0120 12:30:02.681047  580663 system_pods.go:61] "metrics-server-f79f97bbb-4c528" [c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:30:02.681051  580663 system_pods.go:61] "storage-provisioner" [0df3fd9b-b206-4ecd-86cb-60d39e1bf6c1] Running
	I0120 12:30:02.681057  580663 system_pods.go:74] duration metric: took 5.346355ms to wait for pod list to return data ...
	I0120 12:30:02.681065  580663 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:30:02.683574  580663 default_sa.go:45] found service account: "default"
	I0120 12:30:02.683592  580663 default_sa.go:55] duration metric: took 2.522551ms for default service account to be created ...
	I0120 12:30:02.683599  580663 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:30:02.689661  580663 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-677886 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0": signal: killed
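The stderr trace above records minikube's bring-up verification loop: applying the addon manifests with kubectl, waiting for the control-plane pods to report "Ready" (pod_ready.go), and polling the apiserver healthz endpoint until it returns 200 (api_server.go:253/279). The sketch below is a minimal, hypothetical Go illustration of that kind of healthz poll, not minikube's actual implementation; the endpoint URL, retry interval, and overall timeout are assumptions lifted from the log for demonstration only.

// healthzprobe.go — illustrative sketch of an apiserver healthz poll, assuming
// the endpoint shown in the log above (https://192.168.72.157:8443/healthz).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// During bring-up the apiserver presents a cluster-local certificate,
		// so this throwaway probe skips verification; real clients should
		// trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(2 * time.Second) // back off and retry before the next probe
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	// Address taken from the log above; substitute your own cluster endpoint.
	if err := waitForHealthz("https://192.168.72.157:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz returned 200: ok")
}

The post-mortem collection that follows then gathers cluster state for the failed profile.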
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-677886 -n no-preload-677886
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-677886 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-677886 logs -n 25: (1.347010533s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC |                     |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo cat                    | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo cat                    | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo cat                    | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC |                     |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:34:55
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:34:55.317626  593695 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:34:55.318098  593695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:34:55.318140  593695 out.go:358] Setting ErrFile to fd 2...
	I0120 12:34:55.318166  593695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:34:55.318820  593695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	I0120 12:34:55.319727  593695 out.go:352] Setting JSON to false
	I0120 12:34:55.321284  593695 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8237,"bootTime":1737368258,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:34:55.321400  593695 start.go:139] virtualization: kvm guest
	I0120 12:34:55.323443  593695 out.go:177] * [custom-flannel-912009] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:34:55.325326  593695 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:34:55.325338  593695 notify.go:220] Checking for updates...
	I0120 12:34:55.328258  593695 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:34:55.329657  593695 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 12:34:55.331093  593695 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 12:34:55.332440  593695 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:34:55.333657  593695 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:34:55.335502  593695 config.go:182] Loaded profile config "calico-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:34:55.335654  593695 config.go:182] Loaded profile config "embed-certs-565837": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:34:55.335772  593695 config.go:182] Loaded profile config "no-preload-677886": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:34:55.335906  593695 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:34:55.378824  593695 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 12:34:55.380206  593695 start.go:297] selected driver: kvm2
	I0120 12:34:55.380226  593695 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:34:55.380239  593695 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:34:55.380924  593695 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:34:55.380997  593695 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-530330/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:34:55.398891  593695 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:34:55.398946  593695 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 12:34:55.399228  593695 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:34:55.399267  593695 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0120 12:34:55.399286  593695 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0120 12:34:55.399352  593695 start.go:340] cluster config:
	{Name:custom-flannel-912009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:34:55.399486  593695 iso.go:125] acquiring lock: {Name:mk734d848ce0e9a68d8d00ecbd0f5085f599b42f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:34:55.402211  593695 out.go:177] * Starting "custom-flannel-912009" primary control-plane node in "custom-flannel-912009" cluster
	I0120 12:34:55.403487  593695 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:34:55.403526  593695 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
	I0120 12:34:55.403534  593695 cache.go:56] Caching tarball of preloaded images
	I0120 12:34:55.403644  593695 preload.go:172] Found /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0120 12:34:55.403657  593695 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
	I0120 12:34:55.403760  593695 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/config.json ...
	I0120 12:34:55.403781  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/config.json: {Name:mk1f5bd3895f8f37884cdb08f1e892c201dc31bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:34:55.403947  593695 start.go:360] acquireMachinesLock for custom-flannel-912009: {Name:mkcd5f2d114897136dd2343f9fcf468e718657e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:34:55.403984  593695 start.go:364] duration metric: took 19.852µs to acquireMachinesLock for "custom-flannel-912009"
	I0120 12:34:55.404004  593695 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-912009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flanne
l-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 12:34:55.404078  593695 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 12:34:54.418015  591909 node_ready.go:53] node "calico-912009" has status "Ready":"False"
	I0120 12:34:56.418900  591909 node_ready.go:53] node "calico-912009" has status "Ready":"False"
	I0120 12:34:58.918122  591909 node_ready.go:53] node "calico-912009" has status "Ready":"False"
	I0120 12:34:55.405689  593695 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0120 12:34:55.405857  593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:34:55.405898  593695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:55.421394  593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46729
	I0120 12:34:55.421940  593695 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:55.422589  593695 main.go:141] libmachine: Using API Version  1
	I0120 12:34:55.422629  593695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:55.423222  593695 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:55.423525  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetMachineName
	I0120 12:34:55.423711  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:34:55.423949  593695 start.go:159] libmachine.API.Create for "custom-flannel-912009" (driver="kvm2")
	I0120 12:34:55.424001  593695 client.go:168] LocalClient.Create starting
	I0120 12:34:55.424053  593695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem
	I0120 12:34:55.424104  593695 main.go:141] libmachine: Decoding PEM data...
	I0120 12:34:55.424127  593695 main.go:141] libmachine: Parsing certificate...
	I0120 12:34:55.424219  593695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem
	I0120 12:34:55.424244  593695 main.go:141] libmachine: Decoding PEM data...
	I0120 12:34:55.424262  593695 main.go:141] libmachine: Parsing certificate...
	I0120 12:34:55.424287  593695 main.go:141] libmachine: Running pre-create checks...
	I0120 12:34:55.424305  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .PreCreateCheck
	I0120 12:34:55.424734  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetConfigRaw
	I0120 12:34:55.425305  593695 main.go:141] libmachine: Creating machine...
	I0120 12:34:55.425318  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Create
	I0120 12:34:55.425495  593695 main.go:141] libmachine: (custom-flannel-912009) creating KVM machine...
	I0120 12:34:55.425519  593695 main.go:141] libmachine: (custom-flannel-912009) creating network...
	I0120 12:34:55.426842  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found existing default KVM network
	I0120 12:34:55.428088  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:55.427921  593717 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:32:62:a8} reservation:<nil>}
	I0120 12:34:55.429366  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:55.429267  593717 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001194e0}
	I0120 12:34:55.429388  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | created network xml: 
	I0120 12:34:55.429399  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | <network>
	I0120 12:34:55.429409  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |   <name>mk-custom-flannel-912009</name>
	I0120 12:34:55.429417  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |   <dns enable='no'/>
	I0120 12:34:55.429422  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |   
	I0120 12:34:55.429440  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0120 12:34:55.429448  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |     <dhcp>
	I0120 12:34:55.429459  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0120 12:34:55.429475  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |     </dhcp>
	I0120 12:34:55.429487  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |   </ip>
	I0120 12:34:55.429497  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |   
	I0120 12:34:55.429513  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | </network>
	I0120 12:34:55.429524  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | 
	I0120 12:34:55.434573  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | trying to create private KVM network mk-custom-flannel-912009 192.168.50.0/24...
	I0120 12:34:55.523742  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | private KVM network mk-custom-flannel-912009 192.168.50.0/24 created
	I0120 12:34:55.523770  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:55.523396  593717 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 12:34:55.523822  593695 main.go:141] libmachine: (custom-flannel-912009) setting up store path in /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009 ...
	I0120 12:34:55.523855  593695 main.go:141] libmachine: (custom-flannel-912009) building disk image from file:///home/jenkins/minikube-integration/20151-530330/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 12:34:55.523992  593695 main.go:141] libmachine: (custom-flannel-912009) Downloading /home/jenkins/minikube-integration/20151-530330/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20151-530330/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 12:34:55.815001  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:55.814810  593717 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa...
	I0120 12:34:56.245898  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:56.245727  593717 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/custom-flannel-912009.rawdisk...
	I0120 12:34:56.245930  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Writing magic tar header
	I0120 12:34:56.245949  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Writing SSH key tar header
	I0120 12:34:56.245964  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:56.245896  593717 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009 ...
	I0120 12:34:56.245994  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009
	I0120 12:34:56.246097  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-530330/.minikube/machines
	I0120 12:34:56.246128  593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009 (perms=drwx------)
	I0120 12:34:56.246141  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 12:34:56.246172  593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration/20151-530330/.minikube/machines (perms=drwxr-xr-x)
	I0120 12:34:56.246200  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-530330
	I0120 12:34:56.246212  593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration/20151-530330/.minikube (perms=drwxr-xr-x)
	I0120 12:34:56.246229  593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration/20151-530330 (perms=drwxrwxr-x)
	I0120 12:34:56.246238  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 12:34:56.246247  593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 12:34:56.246258  593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 12:34:56.246265  593695 main.go:141] libmachine: (custom-flannel-912009) creating domain...
	I0120 12:34:56.246277  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins
	I0120 12:34:56.246285  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home
	I0120 12:34:56.246295  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | skipping /home - not owner
	I0120 12:34:56.247428  593695 main.go:141] libmachine: (custom-flannel-912009) define libvirt domain using xml: 
	I0120 12:34:56.247449  593695 main.go:141] libmachine: (custom-flannel-912009) <domain type='kvm'>
	I0120 12:34:56.247459  593695 main.go:141] libmachine: (custom-flannel-912009)   <name>custom-flannel-912009</name>
	I0120 12:34:56.247467  593695 main.go:141] libmachine: (custom-flannel-912009)   <memory unit='MiB'>3072</memory>
	I0120 12:34:56.247482  593695 main.go:141] libmachine: (custom-flannel-912009)   <vcpu>2</vcpu>
	I0120 12:34:56.247493  593695 main.go:141] libmachine: (custom-flannel-912009)   <features>
	I0120 12:34:56.247502  593695 main.go:141] libmachine: (custom-flannel-912009)     <acpi/>
	I0120 12:34:56.247525  593695 main.go:141] libmachine: (custom-flannel-912009)     <apic/>
	I0120 12:34:56.247552  593695 main.go:141] libmachine: (custom-flannel-912009)     <pae/>
	I0120 12:34:56.247575  593695 main.go:141] libmachine: (custom-flannel-912009)     
	I0120 12:34:56.247586  593695 main.go:141] libmachine: (custom-flannel-912009)   </features>
	I0120 12:34:56.247595  593695 main.go:141] libmachine: (custom-flannel-912009)   <cpu mode='host-passthrough'>
	I0120 12:34:56.247606  593695 main.go:141] libmachine: (custom-flannel-912009)   
	I0120 12:34:56.247615  593695 main.go:141] libmachine: (custom-flannel-912009)   </cpu>
	I0120 12:34:56.247625  593695 main.go:141] libmachine: (custom-flannel-912009)   <os>
	I0120 12:34:56.247635  593695 main.go:141] libmachine: (custom-flannel-912009)     <type>hvm</type>
	I0120 12:34:56.247644  593695 main.go:141] libmachine: (custom-flannel-912009)     <boot dev='cdrom'/>
	I0120 12:34:56.247658  593695 main.go:141] libmachine: (custom-flannel-912009)     <boot dev='hd'/>
	I0120 12:34:56.247670  593695 main.go:141] libmachine: (custom-flannel-912009)     <bootmenu enable='no'/>
	I0120 12:34:56.247682  593695 main.go:141] libmachine: (custom-flannel-912009)   </os>
	I0120 12:34:56.247690  593695 main.go:141] libmachine: (custom-flannel-912009)   <devices>
	I0120 12:34:56.247701  593695 main.go:141] libmachine: (custom-flannel-912009)     <disk type='file' device='cdrom'>
	I0120 12:34:56.247717  593695 main.go:141] libmachine: (custom-flannel-912009)       <source file='/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/boot2docker.iso'/>
	I0120 12:34:56.247732  593695 main.go:141] libmachine: (custom-flannel-912009)       <target dev='hdc' bus='scsi'/>
	I0120 12:34:56.247741  593695 main.go:141] libmachine: (custom-flannel-912009)       <readonly/>
	I0120 12:34:56.247748  593695 main.go:141] libmachine: (custom-flannel-912009)     </disk>
	I0120 12:34:56.247776  593695 main.go:141] libmachine: (custom-flannel-912009)     <disk type='file' device='disk'>
	I0120 12:34:56.247790  593695 main.go:141] libmachine: (custom-flannel-912009)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 12:34:56.247828  593695 main.go:141] libmachine: (custom-flannel-912009)       <source file='/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/custom-flannel-912009.rawdisk'/>
	I0120 12:34:56.247852  593695 main.go:141] libmachine: (custom-flannel-912009)       <target dev='hda' bus='virtio'/>
	I0120 12:34:56.247876  593695 main.go:141] libmachine: (custom-flannel-912009)     </disk>
	I0120 12:34:56.247896  593695 main.go:141] libmachine: (custom-flannel-912009)     <interface type='network'>
	I0120 12:34:56.247910  593695 main.go:141] libmachine: (custom-flannel-912009)       <source network='mk-custom-flannel-912009'/>
	I0120 12:34:56.247921  593695 main.go:141] libmachine: (custom-flannel-912009)       <model type='virtio'/>
	I0120 12:34:56.247932  593695 main.go:141] libmachine: (custom-flannel-912009)     </interface>
	I0120 12:34:56.247939  593695 main.go:141] libmachine: (custom-flannel-912009)     <interface type='network'>
	I0120 12:34:56.247951  593695 main.go:141] libmachine: (custom-flannel-912009)       <source network='default'/>
	I0120 12:34:56.247968  593695 main.go:141] libmachine: (custom-flannel-912009)       <model type='virtio'/>
	I0120 12:34:56.247979  593695 main.go:141] libmachine: (custom-flannel-912009)     </interface>
	I0120 12:34:56.247989  593695 main.go:141] libmachine: (custom-flannel-912009)     <serial type='pty'>
	I0120 12:34:56.247999  593695 main.go:141] libmachine: (custom-flannel-912009)       <target port='0'/>
	I0120 12:34:56.248009  593695 main.go:141] libmachine: (custom-flannel-912009)     </serial>
	I0120 12:34:56.248018  593695 main.go:141] libmachine: (custom-flannel-912009)     <console type='pty'>
	I0120 12:34:56.248033  593695 main.go:141] libmachine: (custom-flannel-912009)       <target type='serial' port='0'/>
	I0120 12:34:56.248044  593695 main.go:141] libmachine: (custom-flannel-912009)     </console>
	I0120 12:34:56.248063  593695 main.go:141] libmachine: (custom-flannel-912009)     <rng model='virtio'>
	I0120 12:34:56.248077  593695 main.go:141] libmachine: (custom-flannel-912009)       <backend model='random'>/dev/random</backend>
	I0120 12:34:56.248087  593695 main.go:141] libmachine: (custom-flannel-912009)     </rng>
	I0120 12:34:56.248098  593695 main.go:141] libmachine: (custom-flannel-912009)     
	I0120 12:34:56.248108  593695 main.go:141] libmachine: (custom-flannel-912009)     
	I0120 12:34:56.248126  593695 main.go:141] libmachine: (custom-flannel-912009)   </devices>
	I0120 12:34:56.248143  593695 main.go:141] libmachine: (custom-flannel-912009) </domain>
	I0120 12:34:56.248157  593695 main.go:141] libmachine: (custom-flannel-912009) 
	I0120 12:34:56.251886  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:5c:75:87 in network default
	I0120 12:34:56.252644  593695 main.go:141] libmachine: (custom-flannel-912009) starting domain...
	I0120 12:34:56.252667  593695 main.go:141] libmachine: (custom-flannel-912009) ensuring networks are active...
	I0120 12:34:56.252679  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:56.253478  593695 main.go:141] libmachine: (custom-flannel-912009) Ensuring network default is active
	I0120 12:34:56.253856  593695 main.go:141] libmachine: (custom-flannel-912009) Ensuring network mk-custom-flannel-912009 is active
	I0120 12:34:56.254478  593695 main.go:141] libmachine: (custom-flannel-912009) getting domain XML...
	I0120 12:34:56.255132  593695 main.go:141] libmachine: (custom-flannel-912009) creating domain...
	I0120 12:34:57.617443  593695 main.go:141] libmachine: (custom-flannel-912009) waiting for IP...
	I0120 12:34:57.618468  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:57.618975  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:34:57.619079  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:57.618982  593717 retry.go:31] will retry after 310.833975ms: waiting for domain to come up
	I0120 12:34:57.931884  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:57.932609  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:34:57.932671  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:57.932587  593717 retry.go:31] will retry after 389.24926ms: waiting for domain to come up
	I0120 12:34:58.323123  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:58.323741  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:34:58.323766  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:58.323662  593717 retry.go:31] will retry after 328.51544ms: waiting for domain to come up
	I0120 12:34:58.654475  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:58.654999  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:34:58.655031  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:58.654972  593717 retry.go:31] will retry after 459.188002ms: waiting for domain to come up
	I0120 12:34:59.115485  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:59.116075  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:34:59.116099  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:59.116039  593717 retry.go:31] will retry after 671.328829ms: waiting for domain to come up
	I0120 12:34:59.788826  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:59.789486  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:34:59.789535  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:59.789441  593717 retry.go:31] will retry after 722.417342ms: waiting for domain to come up
	I0120 12:35:00.417246  591909 node_ready.go:49] node "calico-912009" has status "Ready":"True"
	I0120 12:35:00.417269  591909 node_ready.go:38] duration metric: took 8.003348027s for node "calico-912009" to be "Ready" ...
	I0120 12:35:00.417280  591909 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:00.427079  591909 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:02.434616  591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:00.513299  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:00.513926  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:00.513953  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:00.513882  593717 retry.go:31] will retry after 1.004102642s: waiting for domain to come up
	I0120 12:35:01.520257  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:01.520856  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:01.520887  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:01.520792  593717 retry.go:31] will retry after 1.187548146s: waiting for domain to come up
	I0120 12:35:02.710370  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:02.710926  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:02.710960  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:02.710891  593717 retry.go:31] will retry after 1.130666152s: waiting for domain to come up
	I0120 12:35:03.843031  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:03.843591  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:03.843657  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:03.843573  593717 retry.go:31] will retry after 2.084857552s: waiting for domain to come up
	I0120 12:35:04.932987  591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:06.934911  591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:05.930313  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:05.930995  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:05.931129  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:05.931024  593717 retry.go:31] will retry after 2.721943033s: waiting for domain to come up
	I0120 12:35:08.655556  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:08.656095  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:08.656125  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:08.656041  593717 retry.go:31] will retry after 3.50397462s: waiting for domain to come up
	I0120 12:35:09.434933  591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:11.938250  591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:12.161925  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:12.162527  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:12.162555  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:12.162507  593717 retry.go:31] will retry after 4.028021149s: waiting for domain to come up
	I0120 12:35:14.433852  591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:16.936370  591909 pod_ready.go:93] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:16.936407  591909 pod_ready.go:82] duration metric: took 16.509299944s for pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:16.936423  591909 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-58f5q" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:18.944599  591909 pod_ready.go:103] pod "calico-node-58f5q" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:16.192015  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:16.192673  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:16.192705  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:16.192623  593717 retry.go:31] will retry after 4.250339401s: waiting for domain to come up
	I0120 12:35:21.444844  591909 pod_ready.go:103] pod "calico-node-58f5q" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:23.961659  591909 pod_ready.go:93] pod "calico-node-58f5q" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:23.961686  591909 pod_ready.go:82] duration metric: took 7.025255499s for pod "calico-node-58f5q" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:23.961697  591909 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-qtrbt" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:23.986722  591909 pod_ready.go:93] pod "coredns-668d6bf9bc-qtrbt" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:23.986746  591909 pod_ready.go:82] duration metric: took 25.042668ms for pod "coredns-668d6bf9bc-qtrbt" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:23.986757  591909 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:23.996405  591909 pod_ready.go:93] pod "etcd-calico-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:23.996431  591909 pod_ready.go:82] duration metric: took 9.66769ms for pod "etcd-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:23.996443  591909 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.005532  591909 pod_ready.go:93] pod "kube-apiserver-calico-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:24.005568  591909 pod_ready.go:82] duration metric: took 9.117419ms for pod "kube-apiserver-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.005586  591909 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.014286  591909 pod_ready.go:93] pod "kube-controller-manager-calico-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:24.014320  591909 pod_ready.go:82] duration metric: took 8.724239ms for pod "kube-controller-manager-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.014336  591909 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-d42xv" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:20.444937  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:20.445623  593695 main.go:141] libmachine: (custom-flannel-912009) found domain IP: 192.168.50.190
	I0120 12:35:20.445652  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has current primary IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:20.445660  593695 main.go:141] libmachine: (custom-flannel-912009) reserving static IP address...
	I0120 12:35:20.446017  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find host DHCP lease matching {name: "custom-flannel-912009", mac: "52:54:00:d9:0c:b1", ip: "192.168.50.190"} in network mk-custom-flannel-912009
	I0120 12:35:20.527289  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Getting to WaitForSSH function...
	I0120 12:35:20.527318  593695 main.go:141] libmachine: (custom-flannel-912009) reserved static IP address 192.168.50.190 for domain custom-flannel-912009
	I0120 12:35:20.527331  593695 main.go:141] libmachine: (custom-flannel-912009) waiting for SSH...
	I0120 12:35:20.530131  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:20.530494  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009
	I0120 12:35:20.530526  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find defined IP address of network mk-custom-flannel-912009 interface with MAC address 52:54:00:d9:0c:b1
	I0120 12:35:20.530642  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Using SSH client type: external
	I0120 12:35:20.530670  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa (-rw-------)
	I0120 12:35:20.530724  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:35:20.530748  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | About to run SSH command:
	I0120 12:35:20.530761  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | exit 0
	I0120 12:35:20.534553  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | SSH cmd err, output: exit status 255: 
	I0120 12:35:20.534581  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0120 12:35:20.534592  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | command : exit 0
	I0120 12:35:20.534604  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | err     : exit status 255
	I0120 12:35:20.534639  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | output  : 
	I0120 12:35:23.534852  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Getting to WaitForSSH function...
	I0120 12:35:23.537219  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.537562  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:23.537593  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.537711  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Using SSH client type: external
	I0120 12:35:23.537734  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa (-rw-------)
	I0120 12:35:23.537766  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:35:23.537778  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | About to run SSH command:
	I0120 12:35:23.537786  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | exit 0
	I0120 12:35:23.666504  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | SSH cmd err, output: <nil>: 
	I0120 12:35:23.666844  593695 main.go:141] libmachine: (custom-flannel-912009) KVM machine creation complete
	I0120 12:35:23.667202  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetConfigRaw
	I0120 12:35:23.667966  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:23.668197  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:23.668360  593695 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 12:35:23.668377  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetState
	I0120 12:35:23.670153  593695 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 12:35:23.670169  593695 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 12:35:23.670175  593695 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 12:35:23.670181  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:23.673109  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.673528  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:23.673551  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.673837  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:23.674105  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:23.674329  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:23.674532  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:23.674693  593695 main.go:141] libmachine: Using SSH client type: native
	I0120 12:35:23.674971  593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.190 22 <nil> <nil>}
	I0120 12:35:23.674989  593695 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 12:35:23.781486  593695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:35:23.781512  593695 main.go:141] libmachine: Detecting the provisioner...
	I0120 12:35:23.781520  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:23.784548  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.785046  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:23.785077  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.785303  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:23.785511  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:23.785694  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:23.785856  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:23.786038  593695 main.go:141] libmachine: Using SSH client type: native
	I0120 12:35:23.786249  593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.190 22 <nil> <nil>}
	I0120 12:35:23.786263  593695 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 12:35:23.895060  593695 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 12:35:23.895164  593695 main.go:141] libmachine: found compatible host: buildroot
	I0120 12:35:23.895185  593695 main.go:141] libmachine: Provisioning with buildroot...
	I0120 12:35:23.895198  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetMachineName
	I0120 12:35:23.895470  593695 buildroot.go:166] provisioning hostname "custom-flannel-912009"
	I0120 12:35:23.895510  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetMachineName
	I0120 12:35:23.895752  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:23.899661  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.900121  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:23.900148  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.900337  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:23.900565  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:23.900738  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:23.900892  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:23.901167  593695 main.go:141] libmachine: Using SSH client type: native
	I0120 12:35:23.901402  593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.190 22 <nil> <nil>}
	I0120 12:35:23.901418  593695 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-912009 && echo "custom-flannel-912009" | sudo tee /etc/hostname
	I0120 12:35:24.029708  593695 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-912009
	
	I0120 12:35:24.029744  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.033017  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.033445  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.033478  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.033777  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:24.034045  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.034311  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.034484  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:24.034713  593695 main.go:141] libmachine: Using SSH client type: native
	I0120 12:35:24.034960  593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.190 22 <nil> <nil>}
	I0120 12:35:24.034989  593695 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-912009' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-912009/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-912009' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:35:24.155682  593695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:35:24.155719  593695 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-530330/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-530330/.minikube}
	I0120 12:35:24.155742  593695 buildroot.go:174] setting up certificates
	I0120 12:35:24.155752  593695 provision.go:84] configureAuth start
	I0120 12:35:24.155761  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetMachineName
	I0120 12:35:24.156072  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetIP
	I0120 12:35:24.159246  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.159526  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.159559  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.159719  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.162295  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.162595  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.162622  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.162796  593695 provision.go:143] copyHostCerts
	I0120 12:35:24.162871  593695 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem, removing ...
	I0120 12:35:24.162897  593695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem
	I0120 12:35:24.163012  593695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem (1675 bytes)
	I0120 12:35:24.163166  593695 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem, removing ...
	I0120 12:35:24.163182  593695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem
	I0120 12:35:24.163224  593695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem (1078 bytes)
	I0120 12:35:24.163301  593695 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem, removing ...
	I0120 12:35:24.163311  593695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem
	I0120 12:35:24.163352  593695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem (1123 bytes)
	I0120 12:35:24.163530  593695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-912009 san=[127.0.0.1 192.168.50.190 custom-flannel-912009 localhost minikube]
	I0120 12:35:24.241848  593695 provision.go:177] copyRemoteCerts
	I0120 12:35:24.241916  593695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:35:24.241950  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.244770  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.245114  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.245138  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.245331  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:24.245514  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.245668  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:24.245760  593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
	I0120 12:35:24.332818  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:35:24.361699  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 12:35:24.391399  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 12:35:24.418431  593695 provision.go:87] duration metric: took 262.665168ms to configureAuth
	I0120 12:35:24.418473  593695 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:35:24.418753  593695 config.go:182] Loaded profile config "custom-flannel-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:35:24.418792  593695 main.go:141] libmachine: Checking connection to Docker...
	I0120 12:35:24.418805  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetURL
	I0120 12:35:24.420068  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | using libvirt version 6000000
	I0120 12:35:24.422715  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.423162  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.423190  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.423456  593695 main.go:141] libmachine: Docker is up and running!
	I0120 12:35:24.423476  593695 main.go:141] libmachine: Reticulating splines...
	I0120 12:35:24.423486  593695 client.go:171] duration metric: took 28.999470441s to LocalClient.Create
	I0120 12:35:24.423515  593695 start.go:167] duration metric: took 28.999566096s to libmachine.API.Create "custom-flannel-912009"
	I0120 12:35:24.423528  593695 start.go:293] postStartSetup for "custom-flannel-912009" (driver="kvm2")
	I0120 12:35:24.423542  593695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:35:24.423569  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:24.423829  593695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:35:24.423855  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.426268  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.426582  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.426609  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.426817  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:24.427012  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.427219  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:24.427395  593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
	I0120 12:35:24.509285  593695 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:35:24.513984  593695 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:35:24.514016  593695 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-530330/.minikube/addons for local assets ...
	I0120 12:35:24.514091  593695 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-530330/.minikube/files for local assets ...
	I0120 12:35:24.514173  593695 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem -> 5375812.pem in /etc/ssl/certs
	I0120 12:35:24.514260  593695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:35:24.523956  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem --> /etc/ssl/certs/5375812.pem (1708 bytes)
	I0120 12:35:24.553908  593695 start.go:296] duration metric: took 130.36042ms for postStartSetup
	I0120 12:35:24.553975  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetConfigRaw
	I0120 12:35:24.554680  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetIP
	I0120 12:35:24.557887  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.558360  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.558399  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.558632  593695 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/config.json ...
	I0120 12:35:24.558858  593695 start.go:128] duration metric: took 29.154769177s to createHost
	I0120 12:35:24.558884  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.561339  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.561943  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.561994  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.562136  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:24.562360  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.562560  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.562828  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:24.563024  593695 main.go:141] libmachine: Using SSH client type: native
	I0120 12:35:24.563258  593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.190 22 <nil> <nil>}
	I0120 12:35:24.563273  593695 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:35:24.671152  593695 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737376524.647779402
	
	I0120 12:35:24.671177  593695 fix.go:216] guest clock: 1737376524.647779402
	I0120 12:35:24.671187  593695 fix.go:229] Guest: 2025-01-20 12:35:24.647779402 +0000 UTC Remote: 2025-01-20 12:35:24.558871919 +0000 UTC m=+29.288117911 (delta=88.907483ms)
	I0120 12:35:24.671208  593695 fix.go:200] guest clock delta is within tolerance: 88.907483ms
	I0120 12:35:24.671213  593695 start.go:83] releasing machines lock for "custom-flannel-912009", held for 29.26722146s
	I0120 12:35:24.671257  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:24.671597  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetIP
	I0120 12:35:24.674668  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.675144  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.675179  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.675303  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:24.675888  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:24.676102  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:24.676270  593695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:35:24.676339  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.676389  593695 ssh_runner.go:195] Run: cat /version.json
	I0120 12:35:24.676418  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.679423  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.679453  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.679849  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.679890  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.679912  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.679941  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.680114  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:24.680284  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:24.680292  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.680454  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:24.680472  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.680601  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:24.680657  593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
	I0120 12:35:24.680719  593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
	I0120 12:35:24.767818  593695 ssh_runner.go:195] Run: systemctl --version
	I0120 12:35:24.795757  593695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:35:24.801932  593695 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:35:24.802005  593695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:35:24.822047  593695 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:35:24.822074  593695 start.go:495] detecting cgroup driver to use...
	I0120 12:35:24.822147  593695 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0120 12:35:24.853585  593695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0120 12:35:24.869225  593695 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:35:24.869302  593695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:35:24.883816  593695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:35:24.897972  593695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:35:25.028005  593695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:35:25.171259  593695 docker.go:233] disabling docker service ...
	I0120 12:35:25.171345  593695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:35:25.187813  593695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:35:25.201348  593695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:35:24.343295  591909 pod_ready.go:93] pod "kube-proxy-d42xv" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:24.343328  591909 pod_ready.go:82] duration metric: took 328.982488ms for pod "kube-proxy-d42xv" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.343343  591909 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.741158  591909 pod_ready.go:93] pod "kube-scheduler-calico-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:24.741188  591909 pod_ready.go:82] duration metric: took 397.835554ms for pod "kube-scheduler-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.741204  591909 pod_ready.go:39] duration metric: took 24.323905541s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:24.741225  591909 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:35:24.741287  591909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:24.758948  591909 api_server.go:72] duration metric: took 33.170230566s to wait for apiserver process to appear ...
	I0120 12:35:24.758984  591909 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:35:24.759013  591909 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8443/healthz ...
	I0120 12:35:24.763591  591909 api_server.go:279] https://192.168.61.244:8443/healthz returned 200:
	ok
	I0120 12:35:24.764729  591909 api_server.go:141] control plane version: v1.32.0
	I0120 12:35:24.764761  591909 api_server.go:131] duration metric: took 5.768981ms to wait for apiserver health ...
	I0120 12:35:24.764772  591909 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:35:24.947474  591909 system_pods.go:59] 9 kube-system pods found
	I0120 12:35:24.947535  591909 system_pods.go:61] "calico-kube-controllers-5745477d4d-mz446" [84466c15-f6c8-4e5e-9e75-a9f5712ec8e6] Running
	I0120 12:35:24.947545  591909 system_pods.go:61] "calico-node-58f5q" [4c659cf9-7e8b-4f9e-a251-005a41562c7c] Running
	I0120 12:35:24.947551  591909 system_pods.go:61] "coredns-668d6bf9bc-qtrbt" [2bf73e76-3e51-4775-931e-49299625214f] Running
	I0120 12:35:24.947555  591909 system_pods.go:61] "etcd-calico-912009" [39631069-4624-4ede-8433-ccc68d866eaa] Running
	I0120 12:35:24.947560  591909 system_pods.go:61] "kube-apiserver-calico-912009" [50d0f21d-f92e-4c26-8dfc-e37ed39827cb] Running
	I0120 12:35:24.947565  591909 system_pods.go:61] "kube-controller-manager-calico-912009" [1f3aef6d-59c0-4413-aa4e-6e23c8881f78] Running
	I0120 12:35:24.947570  591909 system_pods.go:61] "kube-proxy-d42xv" [3d24c7d5-50b1-4871-bc05-74fd339a3e0b] Running
	I0120 12:35:24.947574  591909 system_pods.go:61] "kube-scheduler-calico-912009" [927218e7-10b5-472b-accc-e139302981f3] Running
	I0120 12:35:24.947579  591909 system_pods.go:61] "storage-provisioner" [2124f06a-3841-4d00-85f3-6c7001d3d30d] Running
	I0120 12:35:24.947587  591909 system_pods.go:74] duration metric: took 182.808552ms to wait for pod list to return data ...
	I0120 12:35:24.947598  591909 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:35:25.141030  591909 default_sa.go:45] found service account: "default"
	I0120 12:35:25.141064  591909 default_sa.go:55] duration metric: took 193.459842ms for default service account to be created ...
	I0120 12:35:25.141074  591909 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:35:25.345280  591909 system_pods.go:87] 9 kube-system pods found
	I0120 12:35:25.541923  591909 system_pods.go:105] "calico-kube-controllers-5745477d4d-mz446" [84466c15-f6c8-4e5e-9e75-a9f5712ec8e6] Running
	I0120 12:35:25.541949  591909 system_pods.go:105] "calico-node-58f5q" [4c659cf9-7e8b-4f9e-a251-005a41562c7c] Running
	I0120 12:35:25.541955  591909 system_pods.go:105] "coredns-668d6bf9bc-qtrbt" [2bf73e76-3e51-4775-931e-49299625214f] Running
	I0120 12:35:25.541960  591909 system_pods.go:105] "etcd-calico-912009" [39631069-4624-4ede-8433-ccc68d866eaa] Running
	I0120 12:35:25.541965  591909 system_pods.go:105] "kube-apiserver-calico-912009" [50d0f21d-f92e-4c26-8dfc-e37ed39827cb] Running
	I0120 12:35:25.541969  591909 system_pods.go:105] "kube-controller-manager-calico-912009" [1f3aef6d-59c0-4413-aa4e-6e23c8881f78] Running
	I0120 12:35:25.541974  591909 system_pods.go:105] "kube-proxy-d42xv" [3d24c7d5-50b1-4871-bc05-74fd339a3e0b] Running
	I0120 12:35:25.541981  591909 system_pods.go:105] "kube-scheduler-calico-912009" [927218e7-10b5-472b-accc-e139302981f3] Running
	I0120 12:35:25.541993  591909 system_pods.go:105] "storage-provisioner" [2124f06a-3841-4d00-85f3-6c7001d3d30d] Running
	I0120 12:35:25.542005  591909 system_pods.go:147] duration metric: took 400.9237ms to wait for k8s-apps to be running ...
	I0120 12:35:25.542022  591909 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 12:35:25.542076  591909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:35:25.559267  591909 system_svc.go:56] duration metric: took 17.236172ms WaitForService to wait for kubelet
	I0120 12:35:25.559301  591909 kubeadm.go:582] duration metric: took 33.970593024s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:35:25.559343  591909 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:35:25.741320  591909 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:35:25.741363  591909 node_conditions.go:123] node cpu capacity is 2
	I0120 12:35:25.741379  591909 node_conditions.go:105] duration metric: took 182.030441ms to run NodePressure ...
	I0120 12:35:25.741395  591909 start.go:241] waiting for startup goroutines ...
	I0120 12:35:25.741405  591909 start.go:246] waiting for cluster config update ...
	I0120 12:35:25.741426  591909 start.go:255] writing updated cluster config ...
	I0120 12:35:25.798226  591909 ssh_runner.go:195] Run: rm -f paused
	I0120 12:35:25.864008  591909 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 12:35:25.935661  591909 out.go:177] * Done! kubectl is now configured to use "calico-912009" cluster and "default" namespace by default
	I0120 12:35:25.355950  593695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:35:25.488046  593695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:35:25.503617  593695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:35:25.524909  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0120 12:35:25.535904  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0120 12:35:25.548267  593695 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0120 12:35:25.548339  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0120 12:35:25.559155  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 12:35:25.569907  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0120 12:35:25.581371  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 12:35:25.593457  593695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:35:25.605028  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0120 12:35:25.617300  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0120 12:35:25.629598  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0120 12:35:25.641451  593695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:35:25.653746  593695 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:35:25.653896  593695 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:35:25.669029  593695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:35:25.682069  593695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:35:25.826095  593695 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0120 12:35:25.865783  593695 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0120 12:35:25.865871  593695 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 12:35:25.871185  593695 retry.go:31] will retry after 1.23432325s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0120 12:35:27.105977  593695 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 12:35:27.111951  593695 start.go:563] Will wait 60s for crictl version
	I0120 12:35:27.112034  593695 ssh_runner.go:195] Run: which crictl
	I0120 12:35:27.116737  593695 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:35:27.161217  593695 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0120 12:35:27.161291  593695 ssh_runner.go:195] Run: containerd --version
	I0120 12:35:27.190230  593695 ssh_runner.go:195] Run: containerd --version
	I0120 12:35:27.219481  593695 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
	I0120 12:35:27.220968  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetIP
	I0120 12:35:27.223799  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:27.224137  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:27.224161  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:27.224394  593695 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 12:35:27.228599  593695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:35:27.242027  593695 kubeadm.go:883] updating cluster {Name:custom-flannel-912009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.190 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:35:27.242166  593695 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:35:27.242266  593695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:35:27.280733  593695 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 12:35:27.280808  593695 ssh_runner.go:195] Run: which lz4
	I0120 12:35:27.285414  593695 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:35:27.290608  593695 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:35:27.290637  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398081533 bytes)
	I0120 12:35:28.842033  593695 containerd.go:563] duration metric: took 1.556664096s to copy over tarball
	I0120 12:35:28.842105  593695 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:35:31.289395  593695 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.44725613s)
	I0120 12:35:31.289429  593695 containerd.go:570] duration metric: took 2.44736643s to extract the tarball
	I0120 12:35:31.289440  593695 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 12:35:31.333681  593695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:35:31.450015  593695 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0120 12:35:31.481159  593695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:35:31.540445  593695 retry.go:31] will retry after 180.029348ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-20T12:35:31Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0120 12:35:31.720933  593695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:35:31.764494  593695 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 12:35:31.764524  593695 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:35:31.764532  593695 kubeadm.go:934] updating node { 192.168.50.190 8443 v1.32.0 containerd true true} ...
	I0120 12:35:31.764644  593695 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-912009 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
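
	The ExecStart line above is rendered from the cluster config. A minimal text/template sketch of that rendering follows; kubeletOpts is a hypothetical struct for illustration, not minikube's real config type.

// kubelet_unit_sketch.go - render a kubelet systemd drop-in like the one above.
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const unitTmpl = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet-dropin").Parse(unitTmpl))
	// Values taken from the log above.
	opts := kubeletOpts{KubernetesVersion: "v1.32.0", NodeName: "custom-flannel-912009", NodeIP: "192.168.50.190"}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
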
	I0120 12:35:31.764699  593695 ssh_runner.go:195] Run: sudo crictl info
	I0120 12:35:31.801010  593695 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0120 12:35:31.801048  593695 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:35:31.801070  593695 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.190 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-912009 NodeName:custom-flannel-912009 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:35:31.801206  593695 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "custom-flannel-912009"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.190"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.190"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:35:31.801295  593695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:35:31.812630  593695 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:35:31.812728  593695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:35:31.823817  593695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0120 12:35:31.842930  593695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:35:31.861044  593695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2317 bytes)
	I0120 12:35:31.880051  593695 ssh_runner.go:195] Run: grep 192.168.50.190	control-plane.minikube.internal$ /etc/hosts
	I0120 12:35:31.884576  593695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
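
	The /etc/hosts command above is an idempotent upsert: drop any existing line for the name, then append the fresh IP mapping. Below is a small Go sketch of the same pattern against a local file; the path and values are illustrative, and the real step runs over SSH with sudo.

// hosts_sketch.go - local analogue of the grep -v / echo / cp pipeline above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Equivalent of `grep -v $'\t<name>$'`: skip blank lines and stale entries for this name.
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/tmp/hosts", "192.168.50.190", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
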
	I0120 12:35:31.898346  593695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:35:32.028778  593695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:35:32.052796  593695 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009 for IP: 192.168.50.190
	I0120 12:35:32.052827  593695 certs.go:194] generating shared ca certs ...
	I0120 12:35:32.052845  593695 certs.go:226] acquiring lock for ca certs: {Name:mk52c62007c989bdf47cf8ee68bb49e4d4d8996b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:32.053075  593695 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.key
	I0120 12:35:32.053147  593695 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.key
	I0120 12:35:32.053163  593695 certs.go:256] generating profile certs ...
	I0120 12:35:32.053247  593695 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.key
	I0120 12:35:32.053279  593695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt with IP's: []
	I0120 12:35:32.452867  593695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt ...
	I0120 12:35:32.452901  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: {Name:mk835ad9719695d1ab06cc7c134d449ff4a8ec37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:32.453073  593695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.key ...
	I0120 12:35:32.453086  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.key: {Name:mk5dcd2ed981e6e4fa3ffc179551607c1e7c7c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:32.460567  593695 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key.77137fdc
	I0120 12:35:32.460603  593695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt.77137fdc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.190]
	I0120 12:35:32.709471  593695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt.77137fdc ...
	I0120 12:35:32.709507  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt.77137fdc: {Name:mkecfe0edd1856a9b879cb97ff718bab280ced2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:32.709699  593695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key.77137fdc ...
	I0120 12:35:32.709716  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key.77137fdc: {Name:mk6d882a97424f5468af12647844aaa949a2932d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:32.709838  593695 certs.go:381] copying /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt.77137fdc -> /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt
	I0120 12:35:32.709950  593695 certs.go:385] copying /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key.77137fdc -> /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key
	I0120 12:35:32.710022  593695 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.key
	I0120 12:35:32.710036  593695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.crt with IP's: []
	I0120 12:35:33.008294  593695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.crt ...
	I0120 12:35:33.008328  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.crt: {Name:mk49acca2ab8ab3a30e85bb0e3b8b16095040d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:33.008501  593695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.key ...
	I0120 12:35:33.008514  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.key: {Name:mkc4e59c474ddf1c18711f46c3fda8af2d43d2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:33.008678  593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581.pem (1338 bytes)
	W0120 12:35:33.008717  593695 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581_empty.pem, impossibly tiny 0 bytes
	I0120 12:35:33.008726  593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:35:33.008747  593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:35:33.008801  593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:35:33.008830  593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem (1675 bytes)
	I0120 12:35:33.008869  593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem (1708 bytes)
	I0120 12:35:33.009450  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:35:33.037734  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:35:33.078488  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:35:33.105293  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 12:35:33.130922  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 12:35:33.156034  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 12:35:33.181145  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:35:33.209991  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 12:35:33.236891  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:35:33.263012  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581.pem --> /usr/share/ca-certificates/537581.pem (1338 bytes)
	I0120 12:35:33.291892  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem --> /usr/share/ca-certificates/5375812.pem (1708 bytes)
	I0120 12:35:33.320316  593695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:35:33.339826  593695 ssh_runner.go:195] Run: openssl version
	I0120 12:35:33.346196  593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:35:33.360216  593695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:35:33.365369  593695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:16 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:35:33.365457  593695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:35:33.371913  593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:35:33.384511  593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537581.pem && ln -fs /usr/share/ca-certificates/537581.pem /etc/ssl/certs/537581.pem"
	I0120 12:35:33.396943  593695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537581.pem
	I0120 12:35:33.402006  593695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:24 /usr/share/ca-certificates/537581.pem
	I0120 12:35:33.402094  593695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537581.pem
	I0120 12:35:33.408421  593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537581.pem /etc/ssl/certs/51391683.0"
	I0120 12:35:33.422913  593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5375812.pem && ln -fs /usr/share/ca-certificates/5375812.pem /etc/ssl/certs/5375812.pem"
	I0120 12:35:33.446953  593695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5375812.pem
	I0120 12:35:33.460154  593695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:24 /usr/share/ca-certificates/5375812.pem
	I0120 12:35:33.460243  593695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5375812.pem
	I0120 12:35:33.473049  593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5375812.pem /etc/ssl/certs/3ec20f2e.0"
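
	Each CA above is made trusted by hashing it with openssl and symlinking <hash>.0 in the certs directory to it, which is how OpenSSL's trust store locates certificates. A Go sketch of that step, assuming the openssl binary is on PATH and using illustrative paths:

// certlink_sketch.go - compute the subject hash and create the <hash>.0 symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, certsDir string) error {
	// Same command the log runs: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// ln -fs equivalent: remove any stale link, then create a fresh one.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	dir := "/tmp/ssl-certs"
	_ = os.MkdirAll(dir, 0o755)
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", dir); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
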
	I0120 12:35:33.494370  593695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:35:33.499833  593695 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 12:35:33.499899  593695 kubeadm.go:392] StartCluster: {Name:custom-flannel-912009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.190 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:35:33.500002  593695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0120 12:35:33.500097  593695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:35:33.554921  593695 cri.go:89] found id: ""
	I0120 12:35:33.555004  593695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:35:33.567155  593695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:35:33.579445  593695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:35:33.597705  593695 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:35:33.597735  593695 kubeadm.go:157] found existing configuration files:
	
	I0120 12:35:33.597796  593695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:35:33.610082  593695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:35:33.610143  593695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:35:33.620572  593695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:35:33.630336  593695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:35:33.630477  593695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:35:33.642367  593695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:35:33.654203  593695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:35:33.654285  593695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:35:33.666300  593695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:35:33.678958  593695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:35:33.679034  593695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:35:33.690383  593695 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:35:33.751799  593695 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:35:33.751856  593695 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:35:33.868316  593695 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:35:33.868495  593695 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:35:33.868635  593695 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:35:33.878015  593695 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:35:33.880879  593695 out.go:235]   - Generating certificates and keys ...
	I0120 12:35:33.880991  593695 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:35:33.881075  593695 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:35:34.118211  593695 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 12:35:34.268264  593695 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 12:35:34.395094  593695 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 12:35:34.615258  593695 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 12:35:34.840828  593695 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 12:35:34.841049  593695 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-912009 localhost] and IPs [192.168.50.190 127.0.0.1 ::1]
	I0120 12:35:34.980318  593695 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 12:35:34.980559  593695 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-912009 localhost] and IPs [192.168.50.190 127.0.0.1 ::1]
	I0120 12:35:35.340147  593695 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 12:35:35.661731  593695 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 12:35:35.819536  593695 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 12:35:35.819789  593695 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:35:36.025686  593695 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:35:36.151576  593695 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:35:36.213677  593695 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:35:36.370255  593695 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:35:36.699839  593695 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:35:36.702474  593695 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:35:36.706508  593695 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:35:36.708260  593695 out.go:235]   - Booting up control plane ...
	I0120 12:35:36.708404  593695 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:35:36.708515  593695 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:35:36.708618  593695 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:35:36.727916  593695 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:35:36.734985  593695 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:35:36.735050  593695 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:35:36.891554  593695 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:35:36.891696  593695 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:35:37.892390  593695 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001463848s
	I0120 12:35:37.892535  593695 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:35:42.892060  593695 kubeadm.go:310] [api-check] The API server is healthy after 5.002045649s
	I0120 12:35:42.907359  593695 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:35:42.923769  593695 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:35:42.947405  593695 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:35:42.947611  593695 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-912009 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:35:42.957385  593695 kubeadm.go:310] [bootstrap-token] Using token: pwfscc.y1n10nfegb7ld7mi
	I0120 12:35:42.958829  593695 out.go:235]   - Configuring RBAC rules ...
	I0120 12:35:42.958983  593695 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:35:42.963002  593695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:35:42.972421  593695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:35:42.976005  593695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:35:42.981865  593695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:35:42.985056  593695 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:35:43.299543  593695 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:35:43.743871  593695 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:35:44.299948  593695 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:35:44.304043  593695 kubeadm.go:310] 
	I0120 12:35:44.304135  593695 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:35:44.304148  593695 kubeadm.go:310] 
	I0120 12:35:44.304271  593695 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:35:44.304306  593695 kubeadm.go:310] 
	I0120 12:35:44.304374  593695 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:35:44.304467  593695 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:35:44.304538  593695 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:35:44.304551  593695 kubeadm.go:310] 
	I0120 12:35:44.304616  593695 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:35:44.304627  593695 kubeadm.go:310] 
	I0120 12:35:44.304689  593695 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:35:44.304699  593695 kubeadm.go:310] 
	I0120 12:35:44.304767  593695 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:35:44.304884  593695 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:35:44.304988  593695 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:35:44.305012  593695 kubeadm.go:310] 
	I0120 12:35:44.305132  593695 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:35:44.305245  593695 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:35:44.305260  593695 kubeadm.go:310] 
	I0120 12:35:44.305359  593695 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pwfscc.y1n10nfegb7ld7mi \
	I0120 12:35:44.305494  593695 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:326640d5f51fa6eddf3fd6f2b38f5a08d4608620129e8898d45359839be856c3 \
	I0120 12:35:44.305524  593695 kubeadm.go:310] 	--control-plane 
	I0120 12:35:44.305529  593695 kubeadm.go:310] 
	I0120 12:35:44.305630  593695 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:35:44.305636  593695 kubeadm.go:310] 
	I0120 12:35:44.305725  593695 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pwfscc.y1n10nfegb7ld7mi \
	I0120 12:35:44.305865  593695 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:326640d5f51fa6eddf3fd6f2b38f5a08d4608620129e8898d45359839be856c3 
	I0120 12:35:44.309010  593695 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:35:44.309072  593695 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0120 12:35:44.311925  593695 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0120 12:35:44.313463  593695 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0120 12:35:44.313529  593695 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0120 12:35:44.319726  593695 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0120 12:35:44.319758  593695 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0120 12:35:44.351216  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0120 12:35:44.868640  593695 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:35:44.868740  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:44.868782  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-912009 minikube.k8s.io/updated_at=2025_01_20T12_35_44_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=custom-flannel-912009 minikube.k8s.io/primary=true
	I0120 12:35:45.116669  593695 ops.go:34] apiserver oom_adj: -16
	I0120 12:35:45.116816  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:45.617431  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:46.117712  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:46.616896  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:47.117662  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:47.617183  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:48.116968  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:48.616887  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:48.749904  593695 kubeadm.go:1113] duration metric: took 3.881252521s to wait for elevateKubeSystemPrivileges
	I0120 12:35:48.749953  593695 kubeadm.go:394] duration metric: took 15.250058721s to StartCluster
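
	The repeated `kubectl get sa default` runs above are a fixed-interval poll until the default ServiceAccount exists, i.e. until RBAC and the ServiceAccount controller are up. A sketch of that retry loop, with an illustrative kubectl path and timeout:

// retry_sketch.go - rerun the check every 500ms until success or deadline.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for default service account")
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForDefaultSA("kubectl", os.Getenv("KUBECONFIG"), 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
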
	I0120 12:35:48.749980  593695 settings.go:142] acquiring lock: {Name:mkbafde306c71e7b8958e2377ddfa5a9e3a59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:48.750089  593695 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 12:35:48.752036  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/kubeconfig: {Name:mk113e13541afa8413ea8a359169b0824f5f9ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:48.752297  593695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 12:35:48.752305  593695 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.190 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 12:35:48.752376  593695 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:35:48.752503  593695 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-912009"
	I0120 12:35:48.752529  593695 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-912009"
	I0120 12:35:48.752553  593695 config.go:182] Loaded profile config "custom-flannel-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:35:48.752573  593695 host.go:66] Checking if "custom-flannel-912009" exists ...
	I0120 12:35:48.752614  593695 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-912009"
	I0120 12:35:48.752635  593695 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-912009"
	I0120 12:35:48.753033  593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:35:48.753071  593695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:48.753077  593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:35:48.753115  593695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:48.754038  593695 out.go:177] * Verifying Kubernetes components...
	I0120 12:35:48.755543  593695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:35:48.770900  593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46513
	I0120 12:35:48.770924  593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I0120 12:35:48.771512  593695 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:48.771523  593695 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:48.771980  593695 main.go:141] libmachine: Using API Version  1
	I0120 12:35:48.771999  593695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:48.772120  593695 main.go:141] libmachine: Using API Version  1
	I0120 12:35:48.772167  593695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:48.772407  593695 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:48.772581  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetState
	I0120 12:35:48.772694  593695 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:48.773172  593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:35:48.773221  593695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:48.775953  593695 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-912009"
	I0120 12:35:48.775985  593695 host.go:66] Checking if "custom-flannel-912009" exists ...
	I0120 12:35:48.776217  593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:35:48.776242  593695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:48.791662  593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I0120 12:35:48.791918  593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0120 12:35:48.792260  593695 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:48.792600  593695 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:48.792770  593695 main.go:141] libmachine: Using API Version  1
	I0120 12:35:48.792789  593695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:48.793183  593695 main.go:141] libmachine: Using API Version  1
	I0120 12:35:48.793202  593695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:48.793265  593695 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:48.793756  593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:35:48.793790  593695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:48.793902  593695 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:48.794308  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetState
	I0120 12:35:48.796179  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:48.798629  593695 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:35:48.800337  593695 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:35:48.800353  593695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:35:48.800370  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:48.803462  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:48.803925  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:48.803956  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:48.804206  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:48.804403  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:48.804565  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:48.804707  593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
	I0120 12:35:48.811596  593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0120 12:35:48.811951  593695 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:48.812485  593695 main.go:141] libmachine: Using API Version  1
	I0120 12:35:48.812512  593695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:48.812866  593695 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:48.813065  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetState
	I0120 12:35:48.814819  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:48.814988  593695 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:35:48.814999  593695 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:35:48.815012  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:48.817477  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:48.817881  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:48.817910  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:48.818198  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:48.818380  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:48.818527  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:48.818657  593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
	I0120 12:35:49.140129  593695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:35:49.140225  593695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 12:35:49.271376  593695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:35:49.277298  593695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:35:49.757630  593695 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
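
	The CoreDNS edit above pipes the ConfigMap through sed to insert a `hosts` block (mapping host.minikube.internal to the host IP) before the `forward . /etc/resolv.conf` stanza, then replaces the ConfigMap. A string-level sketch of that Corefile rewrite, with a shortened sample Corefile for illustration:

// corefile_sketch.go - insert a hosts plugin block ahead of the forward stanza.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		// Place the hosts plugin just before the forward-to-resolv.conf stanza.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.50.1"))
}
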
	I0120 12:35:49.759580  593695 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-912009" to be "Ready" ...
	I0120 12:35:50.126202  593695 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:50.126240  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
	I0120 12:35:50.126243  593695 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:50.126267  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
	I0120 12:35:50.126553  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Closing plugin on server side
	I0120 12:35:50.126589  593695 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:50.126596  593695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:50.126602  593695 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:50.126608  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
	I0120 12:35:50.126719  593695 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:50.126731  593695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:50.126764  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Closing plugin on server side
	I0120 12:35:50.126851  593695 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:50.126869  593695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:50.126891  593695 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:50.126902  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
	I0120 12:35:50.127111  593695 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:50.127122  593695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:50.137124  593695 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:50.137145  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
	I0120 12:35:50.137540  593695 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:50.137572  593695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:50.139205  593695 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 12:35:50.140687  593695 addons.go:514] duration metric: took 1.388318596s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 12:35:50.263249  593695 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-912009" context rescaled to 1 replicas
	I0120 12:35:51.764008  593695 node_ready.go:53] node "custom-flannel-912009" has status "Ready":"False"
	I0120 12:35:53.764278  593695 node_ready.go:53] node "custom-flannel-912009" has status "Ready":"False"
	I0120 12:35:56.267054  593695 node_ready.go:53] node "custom-flannel-912009" has status "Ready":"False"
	I0120 12:35:56.762993  593695 node_ready.go:49] node "custom-flannel-912009" has status "Ready":"True"
	I0120 12:35:56.763021  593695 node_ready.go:38] duration metric: took 7.003409226s for node "custom-flannel-912009" to be "Ready" ...
	I0120 12:35:56.763031  593695 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:56.774021  593695 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:58.781485  593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
	I0120 12:36:01.281717  593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
	I0120 12:36:03.281973  593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
	I0120 12:36:05.779798  593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
	I0120 12:36:07.781018  593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
	I0120 12:36:09.781624  593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
	I0120 12:36:12.283171  593695 pod_ready.go:93] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"True"
	I0120 12:36:12.283202  593695 pod_ready.go:82] duration metric: took 15.509154098s for pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.283215  593695 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.288965  593695 pod_ready.go:93] pod "etcd-custom-flannel-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:36:12.288990  593695 pod_ready.go:82] duration metric: took 5.767908ms for pod "etcd-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.289000  593695 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.293688  593695 pod_ready.go:93] pod "kube-apiserver-custom-flannel-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:36:12.293716  593695 pod_ready.go:82] duration metric: took 4.708111ms for pod "kube-apiserver-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.293729  593695 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.297788  593695 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:36:12.297826  593695 pod_ready.go:82] duration metric: took 4.088036ms for pod "kube-controller-manager-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.297840  593695 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-v6hzk" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.301911  593695 pod_ready.go:93] pod "kube-proxy-v6hzk" in "kube-system" namespace has status "Ready":"True"
	I0120 12:36:12.301932  593695 pod_ready.go:82] duration metric: took 4.084396ms for pod "kube-proxy-v6hzk" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.301941  593695 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.678978  593695 pod_ready.go:93] pod "kube-scheduler-custom-flannel-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:36:12.679012  593695 pod_ready.go:82] duration metric: took 377.062726ms for pod "kube-scheduler-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.679029  593695 pod_ready.go:39] duration metric: took 15.915986454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:36:12.679050  593695 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:36:12.679114  593695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:36:12.695820  593695 api_server.go:72] duration metric: took 23.943481333s to wait for apiserver process to appear ...
	I0120 12:36:12.695857  593695 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:36:12.695891  593695 api_server.go:253] Checking apiserver healthz at https://192.168.50.190:8443/healthz ...
	I0120 12:36:12.700809  593695 api_server.go:279] https://192.168.50.190:8443/healthz returned 200:
	ok
	I0120 12:36:12.701918  593695 api_server.go:141] control plane version: v1.32.0
	I0120 12:36:12.701948  593695 api_server.go:131] duration metric: took 6.082216ms to wait for apiserver health ...
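
	The healthz wait above is a plain HTTPS GET against /healthz that treats a 200 "ok" response as healthy. A sketch of that probe; TLS verification is skipped here only because the sketch carries no CA bundle, and the endpoint is the one from the log.

// healthz_sketch.go - probe the apiserver /healthz endpoint once.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log shows the server answering 200 with body "ok".
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("https://192.168.50.190:8443/healthz")
	fmt.Println(healthy, err)
}
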
	I0120 12:36:12.701958  593695 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:36:12.882081  593695 system_pods.go:59] 7 kube-system pods found
	I0120 12:36:12.882124  593695 system_pods.go:61] "coredns-668d6bf9bc-zcgzt" [a4599587-8acf-43f9-a149-178f1cc35aa0] Running
	I0120 12:36:12.882133  593695 system_pods.go:61] "etcd-custom-flannel-912009" [6fb49a98-624e-43ed-850a-8a9c63dd40fc] Running
	I0120 12:36:12.882140  593695 system_pods.go:61] "kube-apiserver-custom-flannel-912009" [4341c7c9-5d0f-4740-a7af-971594286c38] Running
	I0120 12:36:12.882146  593695 system_pods.go:61] "kube-controller-manager-custom-flannel-912009" [0db8a018-592b-4019-a02b-b3565937d695] Running
	I0120 12:36:12.882152  593695 system_pods.go:61] "kube-proxy-v6hzk" [e2019ab7-b2fc-48ac-86d2-c014ff8e07c8] Running
	I0120 12:36:12.882157  593695 system_pods.go:61] "kube-scheduler-custom-flannel-912009" [f739f365-2d5e-45ee-90d9-6e67ba46401a] Running
	I0120 12:36:12.882163  593695 system_pods.go:61] "storage-provisioner" [0f702c35-7c57-44be-aa95-58d0e3c4a56a] Running
	I0120 12:36:12.882171  593695 system_pods.go:74] duration metric: took 180.205562ms to wait for pod list to return data ...
	I0120 12:36:12.882184  593695 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:36:13.078402  593695 default_sa.go:45] found service account: "default"
	I0120 12:36:13.078437  593695 default_sa.go:55] duration metric: took 196.244937ms for default service account to be created ...
	I0120 12:36:13.078449  593695 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:36:13.281225  593695 system_pods.go:87] 7 kube-system pods found
	I0120 12:36:13.479438  593695 system_pods.go:105] "coredns-668d6bf9bc-zcgzt" [a4599587-8acf-43f9-a149-178f1cc35aa0] Running
	I0120 12:36:13.479469  593695 system_pods.go:105] "etcd-custom-flannel-912009" [6fb49a98-624e-43ed-850a-8a9c63dd40fc] Running
	I0120 12:36:13.479478  593695 system_pods.go:105] "kube-apiserver-custom-flannel-912009" [4341c7c9-5d0f-4740-a7af-971594286c38] Running
	I0120 12:36:13.479485  593695 system_pods.go:105] "kube-controller-manager-custom-flannel-912009" [0db8a018-592b-4019-a02b-b3565937d695] Running
	I0120 12:36:13.479491  593695 system_pods.go:105] "kube-proxy-v6hzk" [e2019ab7-b2fc-48ac-86d2-c014ff8e07c8] Running
	I0120 12:36:13.479496  593695 system_pods.go:105] "kube-scheduler-custom-flannel-912009" [f739f365-2d5e-45ee-90d9-6e67ba46401a] Running
	I0120 12:36:13.479501  593695 system_pods.go:105] "storage-provisioner" [0f702c35-7c57-44be-aa95-58d0e3c4a56a] Running
	I0120 12:36:13.479511  593695 system_pods.go:147] duration metric: took 401.053197ms to wait for k8s-apps to be running ...
	I0120 12:36:13.479520  593695 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 12:36:13.479592  593695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:36:13.495091  593695 system_svc.go:56] duration metric: took 15.558739ms WaitForService to wait for kubelet
	I0120 12:36:13.495133  593695 kubeadm.go:582] duration metric: took 24.742796954s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:36:13.495185  593695 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:36:13.679355  593695 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:36:13.679383  593695 node_conditions.go:123] node cpu capacity is 2
	I0120 12:36:13.679395  593695 node_conditions.go:105] duration metric: took 184.200741ms to run NodePressure ...
	I0120 12:36:13.679407  593695 start.go:241] waiting for startup goroutines ...
	I0120 12:36:13.679413  593695 start.go:246] waiting for cluster config update ...
	I0120 12:36:13.679423  593695 start.go:255] writing updated cluster config ...
	I0120 12:36:13.679733  593695 ssh_runner.go:195] Run: rm -f paused
	I0120 12:36:13.731412  593695 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 12:36:13.733373  593695 out.go:177] * Done! kubectl is now configured to use "custom-flannel-912009" cluster and "default" namespace by default
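	For reference, the readiness sequence above ends with a direct probe of the apiserver's /healthz endpoint (the api_server.go lines). Below is a minimal Go sketch of that probe, not minikube's actual client: the endpoint URL is copied from the log, and InsecureSkipVerify stands in for the cluster CA purely for illustration.
	
		package main
	
		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)
	
		func main() {
			// Probe the endpoint the log reports ("Checking apiserver healthz at ...").
			// A healthy control plane answers 200 with the body "ok".
			client := &http.Client{
				Timeout:   5 * time.Second,
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
			}
			resp, err := client.Get("https://192.168.50.190:8443/healthz")
			if err != nil {
				fmt.Println("healthz probe failed:", err)
				return
			}
			defer resp.Body.Close()
			body, _ := io.ReadAll(resp.Body)
			fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok", matching the log
		}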
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	5cf4af7a2d8ca       523cad1a4df73       4 minutes ago       Exited              dashboard-metrics-scraper   8                   06c61c21e245f       dashboard-metrics-scraper-86c6bf9756-vsd89
	17e498bec13d8       07655ddf2eebe       20 minutes ago      Running             kubernetes-dashboard        0                   c2dc07b18735a       kubernetes-dashboard-7779f9b69b-tcsgt
	81d92b6a56c07       6e38f40d628db       20 minutes ago      Running             storage-provisioner         0                   21c007d43c3b5       storage-provisioner
	76a885717143a       c69fa2e9cbf5f       20 minutes ago      Running             coredns                     0                   60f0b0896a631       coredns-668d6bf9bc-9xmv8
	e4e354bee1c02       c69fa2e9cbf5f       20 minutes ago      Running             coredns                     0                   0ae9eb49fb8bd       coredns-668d6bf9bc-wsnqr
	bb046d57f0b60       040f9f8aac8cd       20 minutes ago      Running             kube-proxy                  0                   108e5a42c5c32       kube-proxy-7mw9s
	e79e55fb70131       a389e107f4ff1       20 minutes ago      Running             kube-scheduler              2                   f2d16d62a70b6       kube-scheduler-no-preload-677886
	57f630813f13f       a9e7e6b294baf       20 minutes ago      Running             etcd                        2                   33d206163798c       etcd-no-preload-677886
	f7985b0045eb2       c2e17b8d0f4a3       20 minutes ago      Running             kube-apiserver              2                   2422f768df827       kube-apiserver-no-preload-677886
	857b30c51caac       8cab3d2a8bd0f       20 minutes ago      Running             kube-controller-manager     2                   3907591c5b2d9       kube-controller-manager-no-preload-677886
	
	
	==> containerd <==
	Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.674656527Z" level=info msg="CreateContainer within sandbox \"06c61c21e245f21d22c7241510c19d05fc20da1c4b46effe147cd0c8adf1a148\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:7,} returns container id \"9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af\""
	Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.675781444Z" level=info msg="StartContainer for \"9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af\""
	Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.753746186Z" level=info msg="StartContainer for \"9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af\" returns successfully"
	Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.793602129Z" level=info msg="shim disconnected" id=9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af namespace=k8s.io
	Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.793707460Z" level=warning msg="cleaning up after shim disconnected" id=9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af namespace=k8s.io
	Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.793743803Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.919941071Z" level=info msg="RemoveContainer for \"5fc207c30fb37cc7662422bb462355a0b2a3325ea14f35acaecb5a3258661ebe\""
	Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.935054987Z" level=info msg="RemoveContainer for \"5fc207c30fb37cc7662422bb462355a0b2a3325ea14f35acaecb5a3258661ebe\" returns successfully"
	Jan 20 12:40:57 no-preload-677886 containerd[557]: time="2025-01-20T12:40:57.653076809Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 12:40:57 no-preload-677886 containerd[557]: time="2025-01-20T12:40:57.675325437Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 20 12:40:57 no-preload-677886 containerd[557]: time="2025-01-20T12:40:57.677659249Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 20 12:40:57 no-preload-677886 containerd[557]: time="2025-01-20T12:40:57.677705767Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.655076259Z" level=info msg="CreateContainer within sandbox \"06c61c21e245f21d22c7241510c19d05fc20da1c4b46effe147cd0c8adf1a148\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.680430537Z" level=info msg="CreateContainer within sandbox \"06c61c21e245f21d22c7241510c19d05fc20da1c4b46effe147cd0c8adf1a148\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9\""
	Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.681879982Z" level=info msg="StartContainer for \"5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9\""
	Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.777313112Z" level=info msg="StartContainer for \"5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9\" returns successfully"
	Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.838372711Z" level=info msg="shim disconnected" id=5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9 namespace=k8s.io
	Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.838693055Z" level=warning msg="cleaning up after shim disconnected" id=5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9 namespace=k8s.io
	Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.838842383Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 20 12:46:02 no-preload-677886 containerd[557]: time="2025-01-20T12:46:02.674029185Z" level=info msg="RemoveContainer for \"9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af\""
	Jan 20 12:46:02 no-preload-677886 containerd[557]: time="2025-01-20T12:46:02.686018126Z" level=info msg="RemoveContainer for \"9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af\" returns successfully"
	Jan 20 12:46:07 no-preload-677886 containerd[557]: time="2025-01-20T12:46:07.652806490Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 12:46:07 no-preload-677886 containerd[557]: time="2025-01-20T12:46:07.672930998Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 20 12:46:07 no-preload-677886 containerd[557]: time="2025-01-20T12:46:07.675268782Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 20 12:46:07 no-preload-677886 containerd[557]: time="2025-01-20T12:46:07.675358695Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
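	Every PullImage failure above reduces to the same root cause: the registry host fake.domain has no DNS record, so containerd's HEAD request to https://fake.domain/v2/... can never be sent. A minimal Go sketch that reproduces just that lookup failure:
	
		package main
	
		import (
			"fmt"
			"net"
		)
	
		func main() {
			// The containerd error chain ends in "dial tcp: lookup fake.domain: no such host";
			// a plain DNS lookup of the same host fails the same way.
			if _, err := net.LookupHost("fake.domain"); err != nil {
				fmt.Println(err) // e.g. "lookup fake.domain: no such host"
			}
		}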
	
	
	==> coredns [76a885717143af6da5b22aad50e2f6b5cc735ca978b03ead96d09b801a042ff8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [e4e354bee1c02e245f4ee1aa584e4f9c33452a74cb3e59b6d4e1c4a23dbe13af] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-677886
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-677886
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9
	                    minikube.k8s.io/name=no-preload-677886
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T12_29_52_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 12:29:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-677886
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 12:50:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 12:48:32 +0000   Mon, 20 Jan 2025 12:29:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 12:48:32 +0000   Mon, 20 Jan 2025 12:29:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 12:48:32 +0000   Mon, 20 Jan 2025 12:29:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 12:48:32 +0000   Mon, 20 Jan 2025 12:29:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.157
	  Hostname:    no-preload-677886
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a347a9ff01e4e74b3ae9e6ad1ac1fad
	  System UUID:                9a347a9f-f01e-4e74-b3ae-9e6ad1ac1fad
	  Boot ID:                    635a9d1b-a517-4374-bca0-3a9cf43ae5f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-9xmv8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-668d6bf9bc-wsnqr                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-no-preload-677886                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-no-preload-677886              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-no-preload-677886     200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-7mw9s                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-no-preload-677886              100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-f79f97bbb-4c528                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-vsd89    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-tcsgt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 20m   kube-proxy       
	  Normal  Starting                 20m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m   kubelet          Node no-preload-677886 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m   kubelet          Node no-preload-677886 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m   kubelet          Node no-preload-677886 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m   node-controller  Node no-preload-677886 event: Registered Node no-preload-677886 in Controller
	
	
	==> dmesg <==
	[  +0.054857] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042833] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.122155] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.701509] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.682325] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.549216] systemd-fstab-generator[480]: Ignoring "noauto" option for root device
	[  +0.083447] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073960] systemd-fstab-generator[492]: Ignoring "noauto" option for root device
	[  +0.219134] systemd-fstab-generator[506]: Ignoring "noauto" option for root device
	[  +0.130041] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.340041] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
	[  +1.074118] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +2.160341] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +1.103771] kauditd_printk_skb: 245 callbacks suppressed
	[  +5.165738] kauditd_printk_skb: 30 callbacks suppressed
	[ +12.441907] kauditd_printk_skb: 72 callbacks suppressed
	[Jan20 12:29] systemd-fstab-generator[3022]: Ignoring "noauto" option for root device
	[  +6.599149] systemd-fstab-generator[3390]: Ignoring "noauto" option for root device
	[  +0.100826] kauditd_printk_skb: 87 callbacks suppressed
	[  +4.461564] systemd-fstab-generator[3487]: Ignoring "noauto" option for root device
	[  +1.096108] kauditd_printk_skb: 34 callbacks suppressed
	[Jan20 12:30] kauditd_printk_skb: 90 callbacks suppressed
	[  +6.002434] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [57f630813f13f00958007f01fffdfbb131e4c40d6c4ca9d26a38b27dc1bb5ed5] <==
	{"level":"info","ts":"2025-01-20T12:35:04.530740Z","caller":"traceutil/trace.go:171","msg":"trace[502739809] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:849; }","duration":"203.743885ms","start":"2025-01-20T12:35:04.326977Z","end":"2025-01-20T12:35:04.530721Z","steps":["trace[502739809] 'range keys from in-memory index tree'  (duration: 202.507505ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:35:08.076076Z","caller":"traceutil/trace.go:171","msg":"trace[1805590902] transaction","detail":"{read_only:false; response_revision:854; number_of_response:1; }","duration":"215.268338ms","start":"2025-01-20T12:35:07.860786Z","end":"2025-01-20T12:35:08.076054Z","steps":["trace[1805590902] 'process raft request'  (duration: 214.867161ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:35:25.875692Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.260599ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814550993203984827 > lease_revoke:<id:192e9483b0c17922>","response":"size:28"}
	{"level":"info","ts":"2025-01-20T12:35:25.876544Z","caller":"traceutil/trace.go:171","msg":"trace[1822305913] linearizableReadLoop","detail":"{readStateIndex:950; appliedIndex:948; }","duration":"101.185741ms","start":"2025-01-20T12:35:25.775343Z","end":"2025-01-20T12:35:25.876529Z","steps":["trace[1822305913] 'read index received'  (duration: 90.61518ms)","trace[1822305913] 'applied index is now lower than readState.Index'  (duration: 10.569489ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T12:35:25.877129Z","caller":"traceutil/trace.go:171","msg":"trace[1043700226] transaction","detail":"{read_only:false; response_revision:869; number_of_response:1; }","duration":"121.704233ms","start":"2025-01-20T12:35:25.755406Z","end":"2025-01-20T12:35:25.877110Z","steps":["trace[1043700226] 'process raft request'  (duration: 120.902471ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:35:25.877879Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.511357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:35:25.877947Z","caller":"traceutil/trace.go:171","msg":"trace[801620436] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:869; }","duration":"102.619411ms","start":"2025-01-20T12:35:25.775317Z","end":"2025-01-20T12:35:25.877936Z","steps":["trace[801620436] 'agreement among raft nodes before linearized reading'  (duration: 101.291563ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:35:26.158550Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.117677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:35:26.159042Z","caller":"traceutil/trace.go:171","msg":"trace[376424248] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:869; }","duration":"182.646818ms","start":"2025-01-20T12:35:25.976322Z","end":"2025-01-20T12:35:26.158968Z","steps":["trace[376424248] 'range keys from in-memory index tree'  (duration: 181.932153ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:35:32.698796Z","caller":"traceutil/trace.go:171","msg":"trace[1499265984] linearizableReadLoop","detail":"{readStateIndex:957; appliedIndex:956; }","duration":"123.768592ms","start":"2025-01-20T12:35:32.575005Z","end":"2025-01-20T12:35:32.698774Z","steps":["trace[1499265984] 'read index received'  (duration: 123.580369ms)","trace[1499265984] 'applied index is now lower than readState.Index'  (duration: 187.305µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T12:35:32.699009Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.976115ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:35:32.699039Z","caller":"traceutil/trace.go:171","msg":"trace[661945791] transaction","detail":"{read_only:false; response_revision:875; number_of_response:1; }","duration":"297.790763ms","start":"2025-01-20T12:35:32.401229Z","end":"2025-01-20T12:35:32.699020Z","steps":["trace[661945791] 'process raft request'  (duration: 297.376729ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:35:32.699050Z","caller":"traceutil/trace.go:171","msg":"trace[483635226] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:875; }","duration":"124.059903ms","start":"2025-01-20T12:35:32.574980Z","end":"2025-01-20T12:35:32.699039Z","steps":["trace[483635226] 'agreement among raft nodes before linearized reading'  (duration: 123.958256ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:35:33.133839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"358.192543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:35:33.133967Z","caller":"traceutil/trace.go:171","msg":"trace[1140872822] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:875; }","duration":"358.372372ms","start":"2025-01-20T12:35:32.775575Z","end":"2025-01-20T12:35:33.133948Z","steps":["trace[1140872822] 'range keys from in-memory index tree'  (duration: 358.120438ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:35:33.134031Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T12:35:32.775560Z","time spent":"358.444333ms","remote":"127.0.0.1:56836","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-01-20T12:39:47.396602Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":834}
	{"level":"info","ts":"2025-01-20T12:39:47.440197Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":834,"took":"42.630709ms","hash":1825579988,"current-db-size-bytes":3035136,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":3035136,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-01-20T12:39:47.440367Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1825579988,"revision":834,"compact-revision":-1}
	{"level":"info","ts":"2025-01-20T12:44:47.405763Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1085}
	{"level":"info","ts":"2025-01-20T12:44:47.410863Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1085,"took":"4.397719ms","hash":530143079,"current-db-size-bytes":3035136,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1757184,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T12:44:47.411052Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":530143079,"revision":1085,"compact-revision":834}
	{"level":"info","ts":"2025-01-20T12:49:47.415315Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1337}
	{"level":"info","ts":"2025-01-20T12:49:47.420368Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1337,"took":"4.333345ms","hash":3473013662,"current-db-size-bytes":3035136,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1802240,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T12:49:47.420478Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3473013662,"revision":1337,"compact-revision":1085}
	
	
	==> kernel <==
	 12:50:36 up 25 min,  0 users,  load average: 0.27, 0.33, 0.34
	Linux no-preload-677886 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f7985b0045eb2e8f6137597fe295b4f16ddea6cf369752b86b0769aa64dbcf2d] <==
	I0120 12:45:49.943515       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:45:49.943580       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 12:47:49.944601       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:47:49.944720       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0120 12:47:49.944807       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:47:49.944898       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 12:47:49.946001       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:47:49.946055       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 12:49:48.940420       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:49:48.940702       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 12:49:49.942206       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:49:49.942280       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0120 12:49:49.942356       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:49:49.942431       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 12:49:49.943456       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:49:49.943539       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [857b30c51caaca20624f74f6273daea5d9f5faa387927e88cb41e57658c008fb] <==
	E0120 12:45:55.728959       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:45:55.818653       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:46:02.694585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="287.14µs"
	I0120 12:46:08.961259       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="159.436µs"
	I0120 12:46:20.667084       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="116.56µs"
	E0120 12:46:25.736017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:46:25.826593       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:46:33.669925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="99.808µs"
	E0120 12:46:55.743665       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:46:55.835354       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:47:25.751518       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:47:25.846895       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:47:55.758921       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:47:55.865629       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:48:25.765115       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:48:25.874999       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:48:32.756416       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-677886"
	E0120 12:48:55.771528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:48:55.883592       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:49:25.778842       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:49:25.893444       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:49:55.786648       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:49:55.903027       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:50:25.794786       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:50:25.913505       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [bb046d57f0b60ac605653c0ad3f1d1884f34f7c2e35bbc278da86697c901a81a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 12:29:57.724249       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 12:29:57.796270       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.157"]
	E0120 12:29:57.796346       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 12:29:58.259194       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 12:29:58.259420       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 12:29:58.259548       1 server_linux.go:170] "Using iptables Proxier"
	I0120 12:29:58.282692       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 12:29:58.282963       1 server.go:497] "Version info" version="v1.32.0"
	I0120 12:29:58.282977       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 12:29:58.317220       1 config.go:199] "Starting service config controller"
	I0120 12:29:58.317250       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 12:29:58.317276       1 config.go:105] "Starting endpoint slice config controller"
	I0120 12:29:58.317280       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 12:29:58.326715       1 config.go:329] "Starting node config controller"
	I0120 12:29:58.326729       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 12:29:58.465517       1 shared_informer.go:320] Caches are synced for node config
	I0120 12:29:58.465588       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 12:29:58.465602       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [e79e55fb70131a8b68edddf89a87c3809690ef5705041693b06a7f7f621f088f] <==
	W0120 12:29:48.973835       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 12:29:48.973909       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:29:48.974226       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 12:29:48.974280       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:29:49.819868       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0120 12:29:49.820361       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:29:49.858472       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 12:29:49.859405       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0120 12:29:49.865195       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 12:29:49.865248       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:29:49.979497       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0120 12:29:49.979921       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0120 12:29:50.055332       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 12:29:50.055387       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:29:50.059664       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 12:29:50.060098       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:29:50.144969       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 12:29:50.145023       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 12:29:50.203965       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 12:29:50.204062       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:29:50.214364       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0120 12:29:50.214420       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:29:50.230114       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 12:29:50.230220       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0120 12:29:52.060525       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 12:49:12 no-preload-677886 kubelet[3397]: I0120 12:49:12.648949    3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
	Jan 20 12:49:12 no-preload-677886 kubelet[3397]: E0120 12:49:12.649518    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
	Jan 20 12:49:20 no-preload-677886 kubelet[3397]: E0120 12:49:20.650671    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4c528" podUID="c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778"
	Jan 20 12:49:27 no-preload-677886 kubelet[3397]: I0120 12:49:27.649276    3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
	Jan 20 12:49:27 no-preload-677886 kubelet[3397]: E0120 12:49:27.650479    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
	Jan 20 12:49:32 no-preload-677886 kubelet[3397]: E0120 12:49:32.649214    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4c528" podUID="c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778"
	Jan 20 12:49:39 no-preload-677886 kubelet[3397]: I0120 12:49:39.650725    3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
	Jan 20 12:49:39 no-preload-677886 kubelet[3397]: E0120 12:49:39.652075    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
	Jan 20 12:49:47 no-preload-677886 kubelet[3397]: E0120 12:49:47.650059    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4c528" podUID="c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778"
	Jan 20 12:49:51 no-preload-677886 kubelet[3397]: E0120 12:49:51.672862    3397 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 20 12:49:51 no-preload-677886 kubelet[3397]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 12:49:51 no-preload-677886 kubelet[3397]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 12:49:51 no-preload-677886 kubelet[3397]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 12:49:51 no-preload-677886 kubelet[3397]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 12:49:54 no-preload-677886 kubelet[3397]: I0120 12:49:54.648770    3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
	Jan 20 12:49:54 no-preload-677886 kubelet[3397]: E0120 12:49:54.648999    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
	Jan 20 12:50:02 no-preload-677886 kubelet[3397]: E0120 12:50:02.650557    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4c528" podUID="c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778"
	Jan 20 12:50:09 no-preload-677886 kubelet[3397]: I0120 12:50:09.649430    3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
	Jan 20 12:50:09 no-preload-677886 kubelet[3397]: E0120 12:50:09.650007    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
	Jan 20 12:50:16 no-preload-677886 kubelet[3397]: E0120 12:50:16.650249    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4c528" podUID="c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778"
	Jan 20 12:50:20 no-preload-677886 kubelet[3397]: I0120 12:50:20.649575    3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
	Jan 20 12:50:20 no-preload-677886 kubelet[3397]: E0120 12:50:20.650305    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
	Jan 20 12:50:31 no-preload-677886 kubelet[3397]: E0120 12:50:31.650692    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4c528" podUID="c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778"
	Jan 20 12:50:32 no-preload-677886 kubelet[3397]: I0120 12:50:32.648897    3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
	Jan 20 12:50:32 no-preload-677886 kubelet[3397]: E0120 12:50:32.649102    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
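	The repeated "back-off 5m0s restarting failed container" entries above reflect kubelet's crash-loop back-off, which (assuming the defaults in recent Kubernetes releases) doubles the restart delay after each failed attempt, starting at 10s and capping at 5 minutes. A minimal sketch of that schedule, not kubelet's code:
	
		package main
	
		import (
			"fmt"
			"time"
		)
	
		func main() {
			// Assumed defaults: 10s initial delay, doubling per crash, capped at 5m0s.
			delay, maxDelay := 10*time.Second, 5*time.Minute
			for attempt := 1; attempt <= 8; attempt++ {
				fmt.Printf("restart attempt %d: back-off %s\n", attempt, delay)
				delay *= 2
				if delay > maxDelay {
					delay = maxDelay // from here on the pod stays in CrashLoopBackOff at 5m0s
				}
			}
		}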
	
	
	==> kubernetes-dashboard [17e498bec13d87a58929ba35ccaf56c4211c87612834d20a30470458bc856e1a] <==
	2025/01/20 12:38:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:38:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:39:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:39:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:40:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:40:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:41:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:41:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:42:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:42:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:43:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:43:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:44:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:44:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:45:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:45:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:46:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:46:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:47:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:47:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:48:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:48:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:49:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:49:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:50:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
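	The dashboard's metric client fails its health check at a fixed 30-second cadence for the entire run because the dashboard-metrics-scraper service never becomes ready (its pod is the one stuck in CrashLoopBackOff in the kubelet log above). Purely as an illustration of that fixed-interval retry shape, and not the dashboard's actual implementation, a stdlib-only sketch:

	// healthcheck_retry.go — illustrative only; probe() and the 30s interval are
	// assumptions standing in for the dashboard's metric-client check.
	package main

	import (
		"errors"
		"log"
		"time"
	)

	func probe() error {
		// Stand-in for querying the dashboard-metrics-scraper service.
		return errors.New("the server is currently unable to handle the request")
	}

	func main() {
		ticker := time.NewTicker(30 * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			if err := probe(); err != nil {
				log.Printf("Metric client health check failed: %v. Retrying in 30 seconds.", err)
				continue
			}
			log.Print("Metric client health check succeeded.")
			return
		}
	}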
	
	
	==> storage-provisioner [81d92b6a56c0744c4c3cc5e4db96cf8e4ecb0ce6ad938ce745291373662aaa95] <==
	I0120 12:29:59.089360       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 12:29:59.115241       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 12:29:59.115351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 12:29:59.156780       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 12:29:59.159991       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bde922e1-103e-4ced-9936-c8f670e9c9a5", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-677886_c9d0d7a0-f72f-4cf4-89f8-d0760e9dcde2 became leader
	I0120 12:29:59.160093       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-677886_c9d0d7a0-f72f-4cf4-89f8-d0760e9dcde2!
	I0120 12:29:59.267535       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-677886_c9d0d7a0-f72f-4cf4-89f8-d0760e9dcde2!
	

                                                
                                                
-- /stdout --
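The storage-provisioner block at the end of the dump above is healthy: it acquires the kube-system/k8s.io-minikube-hostpath lock via client-go leader election, records a LeaderElection event, and starts the hostpath provisioner controller. A minimal sketch of that election pattern with client-go (the kubeconfig path, identity string, and the use of a Lease lock instead of the Endpoints lock shown in the log are assumptions; this is not the provisioner's code):

	// leaderelect.go — hypothetical sketch of the client-go leader-election pattern.
	package main

	import (
		"context"
		"log"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "example-holder"})
		if err != nil {
			log.Fatal(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader; the controller would start here") },
				OnStoppedLeading: func() { log.Println("lost leadership") },
			},
		})
	}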
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-677886 -n no-preload-677886
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-677886 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-4c528
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-677886 describe pod metrics-server-f79f97bbb-4c528
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-677886 describe pod metrics-server-f79f97bbb-4c528: exit status 1 (63.810161ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-4c528" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-677886 describe pod metrics-server-f79f97bbb-4c528: exit status 1
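The NotFound here is most plausibly a namespacing artifact of the post-mortem helper rather than evidence that the pod vanished: the describe call above carries no -n flag, so kubectl looks for metrics-server-f79f97bbb-4c528 in the default namespace while the pod lives in kube-system (an inference from the output, not something the report states). A namespaced variant of the same call, again as a hypothetical exec sketch:

	// describe_ns.go — hypothetical; the same describe as helpers_test.go:277 but
	// scoped to kube-system, where the metrics-server pod actually runs.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "no-preload-677886",
			"-n", "kube-system", "describe", "pod",
			"metrics-server-f79f97bbb-4c528").CombinedOutput()
		fmt.Printf("%s\nerr: %v\n", out, err)
	}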
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1540.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (1639.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-565837 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 12:29:00.980555  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:29:00.986986  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:29:00.998458  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:29:01.019906  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:29:01.061460  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:29:01.143582  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:29:01.305036  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:29:01.627065  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:29:02.268503  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:29:03.549921  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:29:06.111326  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:29:11.232717  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:29:21.474101  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-565837 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: signal: killed (27m17.520576294s)

                                                
                                                
-- stdout --
	* [embed-certs-565837] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-565837" primary control-plane node in "embed-certs-565837" cluster
	* Restarting existing kvm2 VM for "embed-certs-565837" ...
	* Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-565837 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:28:47.538035  583738 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:28:47.538144  583738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:28:47.538149  583738 out.go:358] Setting ErrFile to fd 2...
	I0120 12:28:47.538154  583738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:28:47.538377  583738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	I0120 12:28:47.538941  583738 out.go:352] Setting JSON to false
	I0120 12:28:47.540012  583738 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7869,"bootTime":1737368258,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:28:47.540082  583738 start.go:139] virtualization: kvm guest
	I0120 12:28:47.544013  583738 out.go:177] * [embed-certs-565837] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:28:47.545739  583738 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:28:47.545744  583738 notify.go:220] Checking for updates...
	I0120 12:28:47.547505  583738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:28:47.549210  583738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 12:28:47.551337  583738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 12:28:47.553063  583738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:28:47.554687  583738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:28:47.556810  583738 config.go:182] Loaded profile config "embed-certs-565837": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:28:47.557288  583738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:28:47.557343  583738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:28:47.573153  583738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0120 12:28:47.573653  583738 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:28:47.574272  583738 main.go:141] libmachine: Using API Version  1
	I0120 12:28:47.574295  583738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:28:47.574670  583738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:28:47.574949  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:28:47.575237  583738 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:28:47.575571  583738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:28:47.575643  583738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:28:47.591282  583738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0120 12:28:47.591701  583738 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:28:47.592268  583738 main.go:141] libmachine: Using API Version  1
	I0120 12:28:47.592292  583738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:28:47.592602  583738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:28:47.592809  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:28:47.630002  583738 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:28:47.631689  583738 start.go:297] selected driver: kvm2
	I0120 12:28:47.631708  583738 start.go:901] validating driver "kvm2" against &{Name:embed-certs-565837 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-565837 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:28:47.631819  583738 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:28:47.632512  583738 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:28:47.632602  583738 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-530330/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:28:47.648142  583738 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:28:47.648550  583738 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:28:47.648586  583738 cni.go:84] Creating CNI manager for ""
	I0120 12:28:47.648636  583738 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 12:28:47.648670  583738 start.go:340] cluster config:
	{Name:embed-certs-565837 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-565837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:28:47.648771  583738 iso.go:125] acquiring lock: {Name:mk734d848ce0e9a68d8d00ecbd0f5085f599b42f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:28:47.650865  583738 out.go:177] * Starting "embed-certs-565837" primary control-plane node in "embed-certs-565837" cluster
	I0120 12:28:47.652272  583738 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:28:47.652319  583738 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
	I0120 12:28:47.652328  583738 cache.go:56] Caching tarball of preloaded images
	I0120 12:28:47.652441  583738 preload.go:172] Found /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0120 12:28:47.652453  583738 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
	I0120 12:28:47.652558  583738 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/embed-certs-565837/config.json ...
	I0120 12:28:47.652756  583738 start.go:360] acquireMachinesLock for embed-certs-565837: {Name:mkcd5f2d114897136dd2343f9fcf468e718657e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:28:50.007108  583738 start.go:364] duration metric: took 2.354267408s to acquireMachinesLock for "embed-certs-565837"
	I0120 12:28:50.007184  583738 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:28:50.007197  583738 fix.go:54] fixHost starting: 
	I0120 12:28:50.007706  583738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:28:50.007766  583738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:28:50.026117  583738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0120 12:28:50.026657  583738 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:28:50.027198  583738 main.go:141] libmachine: Using API Version  1
	I0120 12:28:50.027217  583738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:28:50.027545  583738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:28:50.027741  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:28:50.027890  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetState
	I0120 12:28:50.029424  583738 fix.go:112] recreateIfNeeded on embed-certs-565837: state=Stopped err=<nil>
	I0120 12:28:50.029445  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	W0120 12:28:50.029600  583738 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:28:50.031831  583738 out.go:177] * Restarting existing kvm2 VM for "embed-certs-565837" ...
	I0120 12:28:50.033288  583738 main.go:141] libmachine: (embed-certs-565837) Calling .Start
	I0120 12:28:50.033477  583738 main.go:141] libmachine: (embed-certs-565837) starting domain...
	I0120 12:28:50.033501  583738 main.go:141] libmachine: (embed-certs-565837) ensuring networks are active...
	I0120 12:28:50.034271  583738 main.go:141] libmachine: (embed-certs-565837) Ensuring network default is active
	I0120 12:28:50.034731  583738 main.go:141] libmachine: (embed-certs-565837) Ensuring network mk-embed-certs-565837 is active
	I0120 12:28:50.035187  583738 main.go:141] libmachine: (embed-certs-565837) getting domain XML...
	I0120 12:28:50.036005  583738 main.go:141] libmachine: (embed-certs-565837) creating domain...
	I0120 12:28:51.437991  583738 main.go:141] libmachine: (embed-certs-565837) waiting for IP...
	I0120 12:28:51.438840  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:28:51.439255  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:28:51.439323  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:28:51.439215  583819 retry.go:31] will retry after 225.098304ms: waiting for domain to come up
	I0120 12:28:51.665631  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:28:51.666400  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:28:51.666456  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:28:51.666239  583819 retry.go:31] will retry after 338.814565ms: waiting for domain to come up
	I0120 12:28:52.006891  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:28:52.007659  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:28:52.007715  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:28:52.007622  583819 retry.go:31] will retry after 457.961499ms: waiting for domain to come up
	I0120 12:28:52.467419  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:28:52.468014  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:28:52.468069  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:28:52.468001  583819 retry.go:31] will retry after 500.99497ms: waiting for domain to come up
	I0120 12:28:52.970487  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:28:52.970942  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:28:52.970961  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:28:52.970926  583819 retry.go:31] will retry after 741.40208ms: waiting for domain to come up
	I0120 12:28:53.714502  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:28:53.715145  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:28:53.715176  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:28:53.715127  583819 retry.go:31] will retry after 742.795607ms: waiting for domain to come up
	I0120 12:28:54.460324  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:28:54.460883  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:28:54.460943  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:28:54.460886  583819 retry.go:31] will retry after 1.006619367s: waiting for domain to come up
	I0120 12:28:55.469650  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:28:55.470312  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:28:55.470345  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:28:55.470232  583819 retry.go:31] will retry after 907.660317ms: waiting for domain to come up
	I0120 12:28:56.380482  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:28:56.381003  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:28:56.381035  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:28:56.380967  583819 retry.go:31] will retry after 1.577595686s: waiting for domain to come up
	I0120 12:28:57.960567  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:28:57.961155  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:28:57.961178  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:28:57.961114  583819 retry.go:31] will retry after 1.778495992s: waiting for domain to come up
	I0120 12:28:59.741030  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:28:59.741498  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:28:59.741550  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:28:59.741481  583819 retry.go:31] will retry after 2.691517059s: waiting for domain to come up
	I0120 12:29:02.435519  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:02.436108  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:29:02.436146  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:29:02.436083  583819 retry.go:31] will retry after 2.675103424s: waiting for domain to come up
	I0120 12:29:05.112530  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:05.113065  583738 main.go:141] libmachine: (embed-certs-565837) DBG | unable to find current IP address of domain embed-certs-565837 in network mk-embed-certs-565837
	I0120 12:29:05.113098  583738 main.go:141] libmachine: (embed-certs-565837) DBG | I0120 12:29:05.113013  583819 retry.go:31] will retry after 4.111792315s: waiting for domain to come up
	I0120 12:29:09.227413  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.228028  583738 main.go:141] libmachine: (embed-certs-565837) found domain IP: 192.168.39.156
	I0120 12:29:09.228058  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has current primary IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.228067  583738 main.go:141] libmachine: (embed-certs-565837) reserving static IP address...
	I0120 12:29:09.228466  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "embed-certs-565837", mac: "52:54:00:8a:b7:35", ip: "192.168.39.156"} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:09.228503  583738 main.go:141] libmachine: (embed-certs-565837) reserved static IP address 192.168.39.156 for domain embed-certs-565837
	I0120 12:29:09.228526  583738 main.go:141] libmachine: (embed-certs-565837) DBG | skip adding static IP to network mk-embed-certs-565837 - found existing host DHCP lease matching {name: "embed-certs-565837", mac: "52:54:00:8a:b7:35", ip: "192.168.39.156"}
	I0120 12:29:09.228546  583738 main.go:141] libmachine: (embed-certs-565837) DBG | Getting to WaitForSSH function...
	I0120 12:29:09.228564  583738 main.go:141] libmachine: (embed-certs-565837) waiting for SSH...
	I0120 12:29:09.231232  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.231555  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:09.231585  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.231722  583738 main.go:141] libmachine: (embed-certs-565837) DBG | Using SSH client type: external
	I0120 12:29:09.231748  583738 main.go:141] libmachine: (embed-certs-565837) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/embed-certs-565837/id_rsa (-rw-------)
	I0120 12:29:09.231817  583738 main.go:141] libmachine: (embed-certs-565837) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-530330/.minikube/machines/embed-certs-565837/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:29:09.231835  583738 main.go:141] libmachine: (embed-certs-565837) DBG | About to run SSH command:
	I0120 12:29:09.231847  583738 main.go:141] libmachine: (embed-certs-565837) DBG | exit 0
	I0120 12:29:09.366964  583738 main.go:141] libmachine: (embed-certs-565837) DBG | SSH cmd err, output: <nil>: 
	I0120 12:29:09.367427  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetConfigRaw
	I0120 12:29:09.368241  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetIP
	I0120 12:29:09.371060  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.371488  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:09.371520  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.371803  583738 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/embed-certs-565837/config.json ...
	I0120 12:29:09.372010  583738 machine.go:93] provisionDockerMachine start ...
	I0120 12:29:09.372034  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:29:09.372287  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:29:09.374818  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.375224  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:09.375251  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.375451  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHPort
	I0120 12:29:09.375654  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:29:09.375844  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:29:09.375997  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHUsername
	I0120 12:29:09.376179  583738 main.go:141] libmachine: Using SSH client type: native
	I0120 12:29:09.376444  583738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0120 12:29:09.376464  583738 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:29:09.494809  583738 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 12:29:09.494843  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetMachineName
	I0120 12:29:09.495109  583738 buildroot.go:166] provisioning hostname "embed-certs-565837"
	I0120 12:29:09.495144  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetMachineName
	I0120 12:29:09.495383  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:29:09.498756  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.499112  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:09.499145  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.499286  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHPort
	I0120 12:29:09.499472  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:29:09.499657  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:29:09.499811  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHUsername
	I0120 12:29:09.499976  583738 main.go:141] libmachine: Using SSH client type: native
	I0120 12:29:09.500155  583738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0120 12:29:09.500171  583738 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-565837 && echo "embed-certs-565837" | sudo tee /etc/hostname
	I0120 12:29:09.632383  583738 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-565837
	
	I0120 12:29:09.632413  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:29:09.635525  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.635839  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:09.635863  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.636066  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHPort
	I0120 12:29:09.636278  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:29:09.636451  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:29:09.636590  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHUsername
	I0120 12:29:09.636717  583738 main.go:141] libmachine: Using SSH client type: native
	I0120 12:29:09.636922  583738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0120 12:29:09.636941  583738 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-565837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-565837/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-565837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:29:09.762276  583738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:29:09.762340  583738 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-530330/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-530330/.minikube}
	I0120 12:29:09.762372  583738 buildroot.go:174] setting up certificates
	I0120 12:29:09.762385  583738 provision.go:84] configureAuth start
	I0120 12:29:09.762399  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetMachineName
	I0120 12:29:09.762716  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetIP
	I0120 12:29:09.765870  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.766309  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:09.766343  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.766555  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:29:09.768829  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.769234  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:09.769281  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.769445  583738 provision.go:143] copyHostCerts
	I0120 12:29:09.769513  583738 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem, removing ...
	I0120 12:29:09.769534  583738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem
	I0120 12:29:09.769599  583738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem (1675 bytes)
	I0120 12:29:09.769683  583738 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem, removing ...
	I0120 12:29:09.769692  583738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem
	I0120 12:29:09.769714  583738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem (1078 bytes)
	I0120 12:29:09.769762  583738 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem, removing ...
	I0120 12:29:09.769769  583738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem
	I0120 12:29:09.769785  583738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem (1123 bytes)
	I0120 12:29:09.769864  583738 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem org=jenkins.embed-certs-565837 san=[127.0.0.1 192.168.39.156 embed-certs-565837 localhost minikube]
	I0120 12:29:09.886402  583738 provision.go:177] copyRemoteCerts
	I0120 12:29:09.886464  583738 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:29:09.886499  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:29:09.889375  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.889779  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:09.889824  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:09.890059  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHPort
	I0120 12:29:09.890248  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:29:09.890417  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHUsername
	I0120 12:29:09.890532  583738 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/embed-certs-565837/id_rsa Username:docker}
	I0120 12:29:09.981008  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0120 12:29:10.008311  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:29:10.034579  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:29:10.061074  583738 provision.go:87] duration metric: took 298.672877ms to configureAuth
	I0120 12:29:10.061106  583738 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:29:10.061336  583738 config.go:182] Loaded profile config "embed-certs-565837": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:29:10.061355  583738 machine.go:96] duration metric: took 689.328892ms to provisionDockerMachine
	I0120 12:29:10.061368  583738 start.go:293] postStartSetup for "embed-certs-565837" (driver="kvm2")
	I0120 12:29:10.061381  583738 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:29:10.061423  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:29:10.061841  583738 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:29:10.061878  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:29:10.065025  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:10.065389  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:10.065437  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:10.065551  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHPort
	I0120 12:29:10.065765  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:29:10.065929  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHUsername
	I0120 12:29:10.066080  583738 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/embed-certs-565837/id_rsa Username:docker}
	I0120 12:29:10.158354  583738 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:29:10.162804  583738 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:29:10.162831  583738 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-530330/.minikube/addons for local assets ...
	I0120 12:29:10.162904  583738 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-530330/.minikube/files for local assets ...
	I0120 12:29:10.162999  583738 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem -> 5375812.pem in /etc/ssl/certs
	I0120 12:29:10.163140  583738 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:29:10.175033  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem --> /etc/ssl/certs/5375812.pem (1708 bytes)
	I0120 12:29:10.202960  583738 start.go:296] duration metric: took 141.575388ms for postStartSetup
	I0120 12:29:10.203068  583738 fix.go:56] duration metric: took 20.195869655s for fixHost
	I0120 12:29:10.203134  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:29:10.206076  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:10.206519  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:10.206551  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:10.206800  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHPort
	I0120 12:29:10.207018  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:29:10.207210  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:29:10.207350  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHUsername
	I0120 12:29:10.207503  583738 main.go:141] libmachine: Using SSH client type: native
	I0120 12:29:10.207761  583738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0120 12:29:10.207782  583738 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:29:10.322981  583738 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737376150.299515678
	
	I0120 12:29:10.323012  583738 fix.go:216] guest clock: 1737376150.299515678
	I0120 12:29:10.323022  583738 fix.go:229] Guest: 2025-01-20 12:29:10.299515678 +0000 UTC Remote: 2025-01-20 12:29:10.203088807 +0000 UTC m=+22.704052517 (delta=96.426871ms)
	I0120 12:29:10.323050  583738 fix.go:200] guest clock delta is within tolerance: 96.426871ms
	I0120 12:29:10.323057  583738 start.go:83] releasing machines lock for "embed-certs-565837", held for 20.315904002s
	I0120 12:29:10.323082  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:29:10.323418  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetIP
	I0120 12:29:10.326291  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:10.326882  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:10.326942  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:10.327101  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:29:10.327638  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:29:10.327853  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:29:10.327977  583738 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:29:10.328032  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:29:10.328094  583738 ssh_runner.go:195] Run: cat /version.json
	I0120 12:29:10.328131  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:29:10.331171  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:10.331369  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:10.331575  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:10.331609  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:10.331736  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:10.331743  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHPort
	I0120 12:29:10.331763  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:10.331956  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:29:10.331966  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHPort
	I0120 12:29:10.332155  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:29:10.332157  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHUsername
	I0120 12:29:10.332328  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHUsername
	I0120 12:29:10.332323  583738 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/embed-certs-565837/id_rsa Username:docker}
	I0120 12:29:10.332498  583738 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/embed-certs-565837/id_rsa Username:docker}
	I0120 12:29:10.415295  583738 ssh_runner.go:195] Run: systemctl --version
	I0120 12:29:10.441278  583738 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:29:10.448358  583738 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:29:10.448454  583738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:29:10.466696  583738 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:29:10.466723  583738 start.go:495] detecting cgroup driver to use...
	I0120 12:29:10.466802  583738 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0120 12:29:10.500460  583738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0120 12:29:10.515396  583738 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:29:10.515465  583738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:29:10.530393  583738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:29:10.545026  583738 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:29:10.674923  583738 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:29:10.872383  583738 docker.go:233] disabling docker service ...
	I0120 12:29:10.872494  583738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:29:10.890492  583738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:29:10.905411  583738 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:29:11.041485  583738 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:29:11.182107  583738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:29:11.197766  583738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:29:11.219691  583738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0120 12:29:11.231405  583738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0120 12:29:11.244023  583738 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0120 12:29:11.244121  583738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0120 12:29:11.256315  583738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 12:29:11.273507  583738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0120 12:29:11.286257  583738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 12:29:11.298530  583738 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:29:11.311511  583738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0120 12:29:11.323553  583738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0120 12:29:11.336197  583738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0120 12:29:11.351009  583738 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:29:11.363842  583738 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:29:11.363914  583738 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:29:11.377364  583738 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:29:11.387758  583738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:29:11.528002  583738 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0120 12:29:11.559547  583738 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0120 12:29:11.559608  583738 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 12:29:11.564857  583738 retry.go:31] will retry after 1.01356143s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0120 12:29:12.579343  583738 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
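
The retry above (stat the containerd socket until it appears, or give up after 60s) is a generic poll-until-ready loop. A small sketch of the same idea in Go, assuming a local path rather than minikube's remote ssh_runner, with the 1s interval as an illustrative choice:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the deadline expires,
// mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if !os.IsNotExist(err) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}
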
	I0120 12:29:12.585761  583738 start.go:563] Will wait 60s for crictl version
	I0120 12:29:12.585977  583738 ssh_runner.go:195] Run: which crictl
	I0120 12:29:12.590715  583738 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:29:12.643682  583738 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0120 12:29:12.643798  583738 ssh_runner.go:195] Run: containerd --version
	I0120 12:29:12.679248  583738 ssh_runner.go:195] Run: containerd --version
	I0120 12:29:12.716474  583738 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
	I0120 12:29:12.717795  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetIP
	I0120 12:29:12.721603  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:12.722108  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:29:12.722140  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:29:12.722404  583738 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 12:29:12.729149  583738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
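
The bash one-liner above rewrites /etc/hosts idempotently: it drops any existing host.minikube.internal line, appends a fresh entry, and copies the result back into place. The same pattern as a Go sketch; the scratch file name in main is a stand-in so the snippet can be tried without touching the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<name>" from the hosts file
// and appends "<ip>\t<name>", mirroring the grep -v / echo / cp one-liner above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var out []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0644)
}

func main() {
	// Hypothetical scratch copy used instead of the real /etc/hosts for a dry run.
	if err := ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("hosts entry ensured")
}
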
	I0120 12:29:12.746683  583738 kubeadm.go:883] updating cluster {Name:embed-certs-565837 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-565837 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:29:12.746847  583738 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:29:12.746911  583738 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:29:12.787281  583738 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 12:29:12.787317  583738 containerd.go:534] Images already preloaded, skipping extraction
	I0120 12:29:12.787404  583738 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:29:12.822118  583738 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 12:29:12.822150  583738 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:29:12.822160  583738 kubeadm.go:934] updating node { 192.168.39.156 8443 v1.32.0 containerd true true} ...
	I0120 12:29:12.822313  583738 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-565837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-565837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:29:12.822386  583738 ssh_runner.go:195] Run: sudo crictl info
	I0120 12:29:12.861090  583738 cni.go:84] Creating CNI manager for ""
	I0120 12:29:12.861114  583738 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 12:29:12.861126  583738 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:29:12.861151  583738 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.156 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-565837 NodeName:embed-certs-565837 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:29:12.861268  583738 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-565837"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.156"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.156"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:29:12.861338  583738 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:29:12.872317  583738 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:29:12.872389  583738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:29:12.885214  583738 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0120 12:29:12.908136  583738 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:29:12.937947  583738 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2314 bytes)
	I0120 12:29:12.968140  583738 ssh_runner.go:195] Run: grep 192.168.39.156	control-plane.minikube.internal$ /etc/hosts
	I0120 12:29:12.974836  583738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:29:12.990252  583738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:29:13.123374  583738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:29:13.146792  583738 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/embed-certs-565837 for IP: 192.168.39.156
	I0120 12:29:13.146822  583738 certs.go:194] generating shared ca certs ...
	I0120 12:29:13.146844  583738 certs.go:226] acquiring lock for ca certs: {Name:mk52c62007c989bdf47cf8ee68bb49e4d4d8996b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:29:13.147045  583738 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.key
	I0120 12:29:13.147116  583738 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.key
	I0120 12:29:13.147133  583738 certs.go:256] generating profile certs ...
	I0120 12:29:13.147306  583738 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/embed-certs-565837/client.key
	I0120 12:29:13.147404  583738 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/embed-certs-565837/apiserver.key.1afee0da
	I0120 12:29:13.147455  583738 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/embed-certs-565837/proxy-client.key
	I0120 12:29:13.147625  583738 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581.pem (1338 bytes)
	W0120 12:29:13.147671  583738 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581_empty.pem, impossibly tiny 0 bytes
	I0120 12:29:13.147685  583738 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:29:13.147717  583738 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:29:13.147754  583738 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:29:13.147795  583738 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem (1675 bytes)
	I0120 12:29:13.147860  583738 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem (1708 bytes)
	I0120 12:29:13.148788  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:29:13.211131  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:29:13.244341  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:29:13.288704  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 12:29:13.328466  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/embed-certs-565837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0120 12:29:13.373332  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/embed-certs-565837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 12:29:13.418607  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/embed-certs-565837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:29:13.445936  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/embed-certs-565837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:29:13.475288  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581.pem --> /usr/share/ca-certificates/537581.pem (1338 bytes)
	I0120 12:29:13.502368  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem --> /usr/share/ca-certificates/5375812.pem (1708 bytes)
	I0120 12:29:13.526802  583738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:29:13.555300  583738 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:29:13.577143  583738 ssh_runner.go:195] Run: openssl version
	I0120 12:29:13.583929  583738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537581.pem && ln -fs /usr/share/ca-certificates/537581.pem /etc/ssl/certs/537581.pem"
	I0120 12:29:13.597630  583738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537581.pem
	I0120 12:29:13.602712  583738 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:24 /usr/share/ca-certificates/537581.pem
	I0120 12:29:13.602778  583738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537581.pem
	I0120 12:29:13.609442  583738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537581.pem /etc/ssl/certs/51391683.0"
	I0120 12:29:13.622603  583738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5375812.pem && ln -fs /usr/share/ca-certificates/5375812.pem /etc/ssl/certs/5375812.pem"
	I0120 12:29:13.636198  583738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5375812.pem
	I0120 12:29:13.641599  583738 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:24 /usr/share/ca-certificates/5375812.pem
	I0120 12:29:13.641678  583738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5375812.pem
	I0120 12:29:13.648830  583738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5375812.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:29:13.662660  583738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:29:13.677255  583738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:29:13.683024  583738 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:16 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:29:13.683106  583738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:29:13.689899  583738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:29:13.703548  583738 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:29:13.708644  583738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:29:13.715698  583738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:29:13.722450  583738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:29:13.729862  583738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:29:13.737622  583738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:29:13.744670  583738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
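
The `openssl x509 -noout -in <cert> -checkend 86400` calls above verify that each control-plane certificate remains valid for at least another 24 hours. An equivalent check with Go's crypto/x509 is sketched below; the certificate paths are the ones used on the guest in the log, so the snippet only produces output where those files exist.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s: %v\n", p, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
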
	I0120 12:29:13.753641  583738 kubeadm.go:392] StartCluster: {Name:embed-certs-565837 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-565837 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:29:13.753782  583738 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0120 12:29:13.753883  583738 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:29:13.797788  583738 cri.go:89] found id: "19fb9ea1b265d8f744eb32d379ae95fe5028902165c1af9cf67956cef76f7ae4"
	I0120 12:29:13.797833  583738 cri.go:89] found id: "7d37229c1f468fbd2b7503def9f712a91d86d73d5a3b1f4f05392904265b1bb2"
	I0120 12:29:13.797841  583738 cri.go:89] found id: "a17d7c8bbae6cfb25bc682e22d31319aa216a0047055e8ee5169eba612dcf4c1"
	I0120 12:29:13.797846  583738 cri.go:89] found id: "f770d9bd2f9993f9284c2f35f7fcbad18dfa62faa6242e221132031c45f4d2d8"
	I0120 12:29:13.797850  583738 cri.go:89] found id: "284aa6aa060ae7259f458cb6a0197fb795201bdcc1fd4f57e61f40fa69c00966"
	I0120 12:29:13.797854  583738 cri.go:89] found id: "3bd06536b079c9bcbcd5930fe501a2f027ff7f95fd3dfe181d62b51d58556a66"
	I0120 12:29:13.797858  583738 cri.go:89] found id: "dfe42ec016987131360b80170c9cb65c61322ff15912414be9627b0a3c728060"
	I0120 12:29:13.797888  583738 cri.go:89] found id: ""
	I0120 12:29:13.797955  583738 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0120 12:29:13.818020  583738 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-20T12:29:13Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0120 12:29:13.818144  583738 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:29:13.832603  583738 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 12:29:13.832637  583738 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 12:29:13.832698  583738 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 12:29:13.844031  583738 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:29:13.845254  583738 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-565837" does not appear in /home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 12:29:13.846046  583738 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-530330/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-565837" cluster setting kubeconfig missing "embed-certs-565837" context setting]
	I0120 12:29:13.847123  583738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/kubeconfig: {Name:mk113e13541afa8413ea8a359169b0824f5f9ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:29:13.849520  583738 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 12:29:13.860645  583738 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.156
	I0120 12:29:13.860685  583738 kubeadm.go:1160] stopping kube-system containers ...
	I0120 12:29:13.860700  583738 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0120 12:29:13.860768  583738 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:29:13.914323  583738 cri.go:89] found id: "19fb9ea1b265d8f744eb32d379ae95fe5028902165c1af9cf67956cef76f7ae4"
	I0120 12:29:13.914352  583738 cri.go:89] found id: "7d37229c1f468fbd2b7503def9f712a91d86d73d5a3b1f4f05392904265b1bb2"
	I0120 12:29:13.914359  583738 cri.go:89] found id: "a17d7c8bbae6cfb25bc682e22d31319aa216a0047055e8ee5169eba612dcf4c1"
	I0120 12:29:13.914373  583738 cri.go:89] found id: "f770d9bd2f9993f9284c2f35f7fcbad18dfa62faa6242e221132031c45f4d2d8"
	I0120 12:29:13.914378  583738 cri.go:89] found id: "284aa6aa060ae7259f458cb6a0197fb795201bdcc1fd4f57e61f40fa69c00966"
	I0120 12:29:13.914382  583738 cri.go:89] found id: "3bd06536b079c9bcbcd5930fe501a2f027ff7f95fd3dfe181d62b51d58556a66"
	I0120 12:29:13.914387  583738 cri.go:89] found id: "dfe42ec016987131360b80170c9cb65c61322ff15912414be9627b0a3c728060"
	I0120 12:29:13.914391  583738 cri.go:89] found id: ""
	I0120 12:29:13.914398  583738 cri.go:252] Stopping containers: [19fb9ea1b265d8f744eb32d379ae95fe5028902165c1af9cf67956cef76f7ae4 7d37229c1f468fbd2b7503def9f712a91d86d73d5a3b1f4f05392904265b1bb2 a17d7c8bbae6cfb25bc682e22d31319aa216a0047055e8ee5169eba612dcf4c1 f770d9bd2f9993f9284c2f35f7fcbad18dfa62faa6242e221132031c45f4d2d8 284aa6aa060ae7259f458cb6a0197fb795201bdcc1fd4f57e61f40fa69c00966 3bd06536b079c9bcbcd5930fe501a2f027ff7f95fd3dfe181d62b51d58556a66 dfe42ec016987131360b80170c9cb65c61322ff15912414be9627b0a3c728060]
	I0120 12:29:13.914478  583738 ssh_runner.go:195] Run: which crictl
	I0120 12:29:13.919224  583738 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 19fb9ea1b265d8f744eb32d379ae95fe5028902165c1af9cf67956cef76f7ae4 7d37229c1f468fbd2b7503def9f712a91d86d73d5a3b1f4f05392904265b1bb2 a17d7c8bbae6cfb25bc682e22d31319aa216a0047055e8ee5169eba612dcf4c1 f770d9bd2f9993f9284c2f35f7fcbad18dfa62faa6242e221132031c45f4d2d8 284aa6aa060ae7259f458cb6a0197fb795201bdcc1fd4f57e61f40fa69c00966 3bd06536b079c9bcbcd5930fe501a2f027ff7f95fd3dfe181d62b51d58556a66 dfe42ec016987131360b80170c9cb65c61322ff15912414be9627b0a3c728060
	I0120 12:29:13.965333  583738 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 12:29:13.984980  583738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:29:13.996649  583738 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:29:13.996674  583738 kubeadm.go:157] found existing configuration files:
	
	I0120 12:29:13.996729  583738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:29:14.006885  583738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:29:14.006962  583738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:29:14.016850  583738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:29:14.026580  583738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:29:14.026653  583738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:29:14.036614  583738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:29:14.046888  583738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:29:14.046958  583738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:29:14.058585  583738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:29:14.071814  583738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:29:14.071880  583738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:29:14.085612  583738 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:29:14.100178  583738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:29:14.258413  583738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:29:15.505105  583738 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.246647697s)
	I0120 12:29:15.505157  583738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:29:15.732241  583738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:29:15.802587  583738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:29:15.901046  583738 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:29:15.901147  583738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:29:16.401408  583738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:29:16.902140  583738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:29:16.931999  583738 api_server.go:72] duration metric: took 1.03095059s to wait for apiserver process to appear ...
	I0120 12:29:16.932035  583738 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:29:16.932063  583738 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0120 12:29:16.932794  583738 api_server.go:269] stopped: https://192.168.39.156:8443/healthz: Get "https://192.168.39.156:8443/healthz": dial tcp 192.168.39.156:8443: connect: connection refused
	I0120 12:29:17.432357  583738 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0120 12:29:19.699423  583738 api_server.go:279] https://192.168.39.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:29:19.699456  583738 api_server.go:103] status: https://192.168.39.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:29:19.699477  583738 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0120 12:29:19.751807  583738 api_server.go:279] https://192.168.39.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:29:19.751845  583738 api_server.go:103] status: https://192.168.39.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:29:19.932186  583738 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0120 12:29:19.938230  583738 api_server.go:279] https://192.168.39.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:29:19.938272  583738 api_server.go:103] status: https://192.168.39.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:29:20.432125  583738 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0120 12:29:20.437103  583738 api_server.go:279] https://192.168.39.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:29:20.437134  583738 api_server.go:103] status: https://192.168.39.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:29:20.932909  583738 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0120 12:29:20.940912  583738 api_server.go:279] https://192.168.39.156:8443/healthz returned 200:
	ok
	I0120 12:29:20.948033  583738 api_server.go:141] control plane version: v1.32.0
	I0120 12:29:20.948068  583738 api_server.go:131] duration metric: took 4.016024309s to wait for apiserver health ...
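
The healthz wait above polls https://192.168.39.156:8443/healthz roughly every 500ms, treating connection refused, 403 (RBAC roles not bootstrapped yet) and 500 (post-start hooks still failing) as "not ready" and a 200 "ok" as healthy. A sketch of that loop is below; skipping TLS verification keeps it short and is an assumption for illustration, whereas minikube itself verifies against the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the timeout elapses.
// Non-200 responses and connection errors are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch short; real code should trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.156:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz: ok")
}
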
	I0120 12:29:20.948082  583738 cni.go:84] Creating CNI manager for ""
	I0120 12:29:20.948090  583738 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 12:29:20.949872  583738 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:29:20.951212  583738 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:29:20.967888  583738 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:29:20.996264  583738 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:29:21.021625  583738 system_pods.go:59] 8 kube-system pods found
	I0120 12:29:21.021661  583738 system_pods.go:61] "coredns-668d6bf9bc-jpcfj" [a90e9e1c-aff5-4b1f-ba03-23de5d38cb97] Running
	I0120 12:29:21.021676  583738 system_pods.go:61] "etcd-embed-certs-565837" [a097c031-77f6-46da-99e1-b69f508798ac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 12:29:21.021687  583738 system_pods.go:61] "kube-apiserver-embed-certs-565837" [651f2547-4649-4bea-86d0-28203824fc09] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 12:29:21.021697  583738 system_pods.go:61] "kube-controller-manager-embed-certs-565837" [70ae0325-95fd-4cce-84b6-d8e777070138] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 12:29:21.021704  583738 system_pods.go:61] "kube-proxy-xpznf" [31c4a75e-edee-4470-bd71-3411f79ca95a] Running
	I0120 12:29:21.021715  583738 system_pods.go:61] "kube-scheduler-embed-certs-565837" [64d7ad1e-68ff-4f67-a5de-2c9858084fa1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 12:29:21.021727  583738 system_pods.go:61] "metrics-server-f79f97bbb-cjm2p" [ab7e131a-c2a9-43fb-9377-011fc428b947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:29:21.021739  583738 system_pods.go:61] "storage-provisioner" [0e34a269-828f-4549-bc32-dacb7d752065] Running
	I0120 12:29:21.021747  583738 system_pods.go:74] duration metric: took 25.453145ms to wait for pod list to return data ...
	I0120 12:29:21.021761  583738 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:29:21.031516  583738 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:29:21.031549  583738 node_conditions.go:123] node cpu capacity is 2
	I0120 12:29:21.031571  583738 node_conditions.go:105] duration metric: took 9.804543ms to run NodePressure ...
	I0120 12:29:21.031591  583738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:29:21.387544  583738 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 12:29:21.394991  583738 kubeadm.go:739] kubelet initialised
	I0120 12:29:21.395024  583738 kubeadm.go:740] duration metric: took 7.447079ms waiting for restarted kubelet to initialise ...
	I0120 12:29:21.395037  583738 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:29:21.408728  583738 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-jpcfj" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:23.421486  583738 pod_ready.go:103] pod "coredns-668d6bf9bc-jpcfj" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:24.418038  583738 pod_ready.go:93] pod "coredns-668d6bf9bc-jpcfj" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:24.418071  583738 pod_ready.go:82] duration metric: took 3.009311676s for pod "coredns-668d6bf9bc-jpcfj" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:24.418086  583738 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:26.425633  583738 pod_ready.go:103] pod "etcd-embed-certs-565837" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:28.427318  583738 pod_ready.go:103] pod "etcd-embed-certs-565837" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:30.924479  583738 pod_ready.go:103] pod "etcd-embed-certs-565837" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:32.925260  583738 pod_ready.go:103] pod "etcd-embed-certs-565837" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:33.925207  583738 pod_ready.go:93] pod "etcd-embed-certs-565837" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:33.925232  583738 pod_ready.go:82] duration metric: took 9.507138602s for pod "etcd-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:33.925242  583738 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:33.930622  583738 pod_ready.go:93] pod "kube-apiserver-embed-certs-565837" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:33.930648  583738 pod_ready.go:82] duration metric: took 5.399507ms for pod "kube-apiserver-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:33.930658  583738 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:33.935115  583738 pod_ready.go:93] pod "kube-controller-manager-embed-certs-565837" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:33.935142  583738 pod_ready.go:82] duration metric: took 4.475692ms for pod "kube-controller-manager-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:33.935155  583738 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xpznf" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:33.939451  583738 pod_ready.go:93] pod "kube-proxy-xpznf" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:33.939475  583738 pod_ready.go:82] duration metric: took 4.311496ms for pod "kube-proxy-xpznf" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:33.939487  583738 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:33.943579  583738 pod_ready.go:93] pod "kube-scheduler-embed-certs-565837" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:33.943600  583738 pod_ready.go:82] duration metric: took 4.103996ms for pod "kube-scheduler-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:33.943611  583738 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-cjm2p" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:35.958479  583738 pod_ready.go:103] pod "metrics-server-f79f97bbb-cjm2p" in "kube-system" namespace has status "Ready":"False"
	... (pod "metrics-server-f79f97bbb-cjm2p" in "kube-system" namespace continued to report status "Ready":"False" on every poll, roughly every 2-3 seconds, from 12:29:38 through 12:33:31) ...
	I0120 12:33:33.944041  583738 pod_ready.go:82] duration metric: took 4m0.000395552s for pod "metrics-server-f79f97bbb-cjm2p" in "kube-system" namespace to be "Ready" ...
	E0120 12:33:33.944080  583738 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 12:33:33.944100  583738 pod_ready.go:39] duration metric: took 4m12.549051453s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:33:33.944132  583738 kubeadm.go:597] duration metric: took 4m20.111488302s to restartPrimaryControlPlane
	W0120 12:33:33.944198  583738 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:33:33.944231  583738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0120 12:33:35.957618  583738 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.013358605s)
	I0120 12:33:35.957706  583738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:33:35.974268  583738 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:33:35.987231  583738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:33:36.001880  583738 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:33:36.001914  583738 kubeadm.go:157] found existing configuration files:
	
	I0120 12:33:36.001974  583738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:33:36.016081  583738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:33:36.016161  583738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:33:36.028418  583738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:33:36.040517  583738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:33:36.040593  583738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:33:36.052887  583738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:33:36.063786  583738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:33:36.063859  583738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:33:36.075163  583738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:33:36.087544  583738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:33:36.087621  583738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:33:36.100858  583738 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:33:36.162507  583738 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:33:36.162612  583738 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:33:36.284106  583738 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:33:36.284287  583738 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:33:36.284427  583738 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:33:36.291873  583738 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:33:36.294140  583738 out.go:235]   - Generating certificates and keys ...
	I0120 12:33:36.294268  583738 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:33:36.294364  583738 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:33:36.294479  583738 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:33:36.294564  583738 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:33:36.294672  583738 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:33:36.294761  583738 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:33:36.294850  583738 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:33:36.294938  583738 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:33:36.295043  583738 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:33:36.295165  583738 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:33:36.295243  583738 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:33:36.295327  583738 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:33:36.394239  583738 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:33:36.518375  583738 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:33:36.882701  583738 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:33:37.076557  583738 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:33:37.327085  583738 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:33:37.327958  583738 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:33:37.332196  583738 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:33:37.333823  583738 out.go:235]   - Booting up control plane ...
	I0120 12:33:37.333958  583738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:33:37.334065  583738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:33:37.334969  583738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:33:37.367288  583738 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:33:37.376078  583738 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:33:37.376208  583738 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:33:37.578249  583738 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:33:37.578409  583738 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:33:38.579698  583738 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001744834s
	I0120 12:33:38.579850  583738 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:33:43.581335  583738 kubeadm.go:310] [api-check] The API server is healthy after 5.001497708s
	I0120 12:33:43.598427  583738 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:33:43.618959  583738 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:33:43.645655  583738 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:33:43.645894  583738 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-565837 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:33:43.657250  583738 kubeadm.go:310] [bootstrap-token] Using token: a7s94o.dk16hedvcwwf2gmf
	I0120 12:33:43.658897  583738 out.go:235]   - Configuring RBAC rules ...
	I0120 12:33:43.659068  583738 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:33:43.664687  583738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:33:43.675358  583738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:33:43.680308  583738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:33:43.685586  583738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:33:43.692306  583738 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:33:43.991626  583738 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:33:44.424415  583738 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:33:44.990769  583738 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:33:44.991719  583738 kubeadm.go:310] 
	I0120 12:33:44.991818  583738 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:33:44.991831  583738 kubeadm.go:310] 
	I0120 12:33:44.991952  583738 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:33:44.991993  583738 kubeadm.go:310] 
	I0120 12:33:44.992041  583738 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:33:44.992124  583738 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:33:44.992193  583738 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:33:44.992204  583738 kubeadm.go:310] 
	I0120 12:33:44.992291  583738 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:33:44.992303  583738 kubeadm.go:310] 
	I0120 12:33:44.992363  583738 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:33:44.992373  583738 kubeadm.go:310] 
	I0120 12:33:44.992443  583738 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:33:44.992567  583738 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:33:44.992674  583738 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:33:44.992687  583738 kubeadm.go:310] 
	I0120 12:33:44.992814  583738 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:33:44.992928  583738 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:33:44.992955  583738 kubeadm.go:310] 
	I0120 12:33:44.993088  583738 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a7s94o.dk16hedvcwwf2gmf \
	I0120 12:33:44.993279  583738 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:326640d5f51fa6eddf3fd6f2b38f5a08d4608620129e8898d45359839be856c3 \
	I0120 12:33:44.993330  583738 kubeadm.go:310] 	--control-plane 
	I0120 12:33:44.993348  583738 kubeadm.go:310] 
	I0120 12:33:44.993504  583738 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:33:44.993522  583738 kubeadm.go:310] 
	I0120 12:33:44.993630  583738 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a7s94o.dk16hedvcwwf2gmf \
	I0120 12:33:44.993822  583738 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:326640d5f51fa6eddf3fd6f2b38f5a08d4608620129e8898d45359839be856c3 
	I0120 12:33:44.994529  583738 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:33:44.994727  583738 cni.go:84] Creating CNI manager for ""
	I0120 12:33:44.994747  583738 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 12:33:44.996725  583738 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:33:44.998162  583738 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:33:45.010790  583738 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:33:45.033335  583738 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:33:45.033408  583738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:45.033432  583738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-565837 minikube.k8s.io/updated_at=2025_01_20T12_33_45_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=embed-certs-565837 minikube.k8s.io/primary=true
	I0120 12:33:45.254952  583738 ops.go:34] apiserver oom_adj: -16
	I0120 12:33:45.255399  583738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:45.756078  583738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:46.255774  583738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:46.755871  583738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:47.255523  583738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:47.756485  583738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:48.255909  583738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:48.755805  583738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:49.256159  583738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:49.369213  583738 kubeadm.go:1113] duration metric: took 4.335865811s to wait for elevateKubeSystemPrivileges
	I0120 12:33:49.369258  583738 kubeadm.go:394] duration metric: took 4m35.615639072s to StartCluster
	I0120 12:33:49.369287  583738 settings.go:142] acquiring lock: {Name:mkbafde306c71e7b8958e2377ddfa5a9e3a59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:33:49.369421  583738 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 12:33:49.370934  583738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/kubeconfig: {Name:mk113e13541afa8413ea8a359169b0824f5f9ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:33:49.371284  583738 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 12:33:49.371421  583738 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:33:49.371520  583738 config.go:182] Loaded profile config "embed-certs-565837": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:33:49.371556  583738 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-565837"
	I0120 12:33:49.371579  583738 addons.go:69] Setting default-storageclass=true in profile "embed-certs-565837"
	I0120 12:33:49.371586  583738 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-565837"
	W0120 12:33:49.371598  583738 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:33:49.371608  583738 addons.go:69] Setting dashboard=true in profile "embed-certs-565837"
	I0120 12:33:49.371612  583738 addons.go:69] Setting metrics-server=true in profile "embed-certs-565837"
	I0120 12:33:49.371626  583738 addons.go:238] Setting addon dashboard=true in "embed-certs-565837"
	I0120 12:33:49.371632  583738 addons.go:238] Setting addon metrics-server=true in "embed-certs-565837"
	W0120 12:33:49.371636  583738 addons.go:247] addon dashboard should already be in state true
	W0120 12:33:49.371641  583738 addons.go:247] addon metrics-server should already be in state true
	I0120 12:33:49.371643  583738 host.go:66] Checking if "embed-certs-565837" exists ...
	I0120 12:33:49.371668  583738 host.go:66] Checking if "embed-certs-565837" exists ...
	I0120 12:33:49.371670  583738 host.go:66] Checking if "embed-certs-565837" exists ...
	I0120 12:33:49.371597  583738 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-565837"
	I0120 12:33:49.372091  583738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:33:49.372099  583738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:33:49.372137  583738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:49.372141  583738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:33:49.372182  583738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:49.372199  583738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:33:49.372237  583738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:49.372249  583738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:49.373081  583738 out.go:177] * Verifying Kubernetes components...
	I0120 12:33:49.374903  583738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:33:49.390577  583738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40261
	I0120 12:33:49.391329  583738 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:49.391947  583738 main.go:141] libmachine: Using API Version  1
	I0120 12:33:49.391975  583738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:49.392060  583738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41361
	I0120 12:33:49.392402  583738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:49.392479  583738 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:49.392813  583738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0120 12:33:49.393012  583738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:33:49.393053  583738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:49.393066  583738 main.go:141] libmachine: Using API Version  1
	I0120 12:33:49.393084  583738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:49.393149  583738 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:49.393424  583738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35023
	I0120 12:33:49.393627  583738 main.go:141] libmachine: Using API Version  1
	I0120 12:33:49.393655  583738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:49.393904  583738 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:49.394107  583738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:49.394222  583738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:49.394290  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetState
	I0120 12:33:49.394458  583738 main.go:141] libmachine: Using API Version  1
	I0120 12:33:49.394480  583738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:49.394868  583738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:49.394876  583738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:33:49.394912  583738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:49.395621  583738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:33:49.395670  583738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:49.398662  583738 addons.go:238] Setting addon default-storageclass=true in "embed-certs-565837"
	W0120 12:33:49.398687  583738 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:33:49.398724  583738 host.go:66] Checking if "embed-certs-565837" exists ...
	I0120 12:33:49.399098  583738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:33:49.399150  583738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:49.415460  583738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46579
	I0120 12:33:49.416370  583738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0120 12:33:49.416400  583738 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:49.416710  583738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0120 12:33:49.417106  583738 main.go:141] libmachine: Using API Version  1
	I0120 12:33:49.417133  583738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:49.417213  583738 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:49.417299  583738 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:49.417675  583738 main.go:141] libmachine: Using API Version  1
	I0120 12:33:49.417701  583738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:49.417837  583738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:49.418084  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetState
	I0120 12:33:49.418092  583738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:49.418531  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetState
	I0120 12:33:49.418736  583738 main.go:141] libmachine: Using API Version  1
	I0120 12:33:49.418749  583738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:49.419094  583738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:49.419373  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetState
	I0120 12:33:49.419828  583738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41407
	I0120 12:33:49.420396  583738 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:49.420853  583738 main.go:141] libmachine: Using API Version  1
	I0120 12:33:49.420865  583738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:49.421024  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:33:49.421488  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:33:49.421876  583738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:49.422099  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:33:49.422649  583738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:33:49.422730  583738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:49.423085  583738 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:33:49.423096  583738 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:33:49.424027  583738 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:33:49.424848  583738 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:33:49.424870  583738 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:33:49.424894  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:33:49.425664  583738 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:33:49.425857  583738 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:33:49.425910  583738 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:33:49.425951  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:33:49.428909  583738 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:33:49.428931  583738 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:33:49.428952  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:33:49.429623  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:33:49.430179  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:33:49.430226  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:33:49.430249  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:33:49.430353  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHPort
	I0120 12:33:49.430712  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:33:49.430889  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHUsername
	I0120 12:33:49.431018  583738 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/embed-certs-565837/id_rsa Username:docker}
	I0120 12:33:49.431699  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHPort
	I0120 12:33:49.431728  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:33:49.431759  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:33:49.431866  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:33:49.432179  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHUsername
	I0120 12:33:49.432344  583738 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/embed-certs-565837/id_rsa Username:docker}
	I0120 12:33:49.432688  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:33:49.432893  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:33:49.432917  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:33:49.433017  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHPort
	I0120 12:33:49.433178  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:33:49.433287  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHUsername
	I0120 12:33:49.433395  583738 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/embed-certs-565837/id_rsa Username:docker}
	I0120 12:33:49.448462  583738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40027
	I0120 12:33:49.448982  583738 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:49.449666  583738 main.go:141] libmachine: Using API Version  1
	I0120 12:33:49.449689  583738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:49.450032  583738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:49.450255  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetState
	I0120 12:33:49.451923  583738 main.go:141] libmachine: (embed-certs-565837) Calling .DriverName
	I0120 12:33:49.452174  583738 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:33:49.452193  583738 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:33:49.452213  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHHostname
	I0120 12:33:49.455488  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:33:49.455983  583738 main.go:141] libmachine: (embed-certs-565837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:b7:35", ip: ""} in network mk-embed-certs-565837: {Iface:virbr1 ExpiryTime:2025-01-20 13:26:22 +0000 UTC Type:0 Mac:52:54:00:8a:b7:35 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:embed-certs-565837 Clientid:01:52:54:00:8a:b7:35}
	I0120 12:33:49.456052  583738 main.go:141] libmachine: (embed-certs-565837) DBG | domain embed-certs-565837 has defined IP address 192.168.39.156 and MAC address 52:54:00:8a:b7:35 in network mk-embed-certs-565837
	I0120 12:33:49.456222  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHPort
	I0120 12:33:49.456416  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHKeyPath
	I0120 12:33:49.456573  583738 main.go:141] libmachine: (embed-certs-565837) Calling .GetSSHUsername
	I0120 12:33:49.456694  583738 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/embed-certs-565837/id_rsa Username:docker}
	I0120 12:33:49.610352  583738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:33:49.639367  583738 node_ready.go:35] waiting up to 6m0s for node "embed-certs-565837" to be "Ready" ...
	I0120 12:33:49.676550  583738 node_ready.go:49] node "embed-certs-565837" has status "Ready":"True"
	I0120 12:33:49.676597  583738 node_ready.go:38] duration metric: took 37.194932ms for node "embed-certs-565837" to be "Ready" ...
	I0120 12:33:49.676611  583738 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:33:49.691522  583738 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-42d6j" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:49.770168  583738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:33:49.772571  583738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:33:49.838443  583738 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:33:49.838477  583738 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:33:49.847415  583738 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:33:49.847450  583738 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:33:49.916522  583738 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:33:49.916559  583738 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:33:49.957332  583738 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:33:49.957360  583738 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:33:50.067552  583738 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:33:50.067585  583738 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:33:50.076296  583738 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:33:50.076326  583738 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:33:50.141469  583738 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:33:50.141506  583738 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:33:50.225470  583738 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:33:50.225503  583738 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:33:50.293593  583738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:33:50.432083  583738 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:33:50.432113  583738 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:33:50.608426  583738 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:33:50.608460  583738 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:33:50.679814  583738 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:33:50.679842  583738 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:33:50.845875  583738 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:33:50.845911  583738 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:33:51.093634  583738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:33:51.385443  583738 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.615225225s)
	I0120 12:33:51.385536  583738 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:51.385556  583738 main.go:141] libmachine: (embed-certs-565837) Calling .Close
	I0120 12:33:51.385464  583738 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.612838781s)
	I0120 12:33:51.385674  583738 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:51.385697  583738 main.go:141] libmachine: (embed-certs-565837) Calling .Close
	I0120 12:33:51.385871  583738 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:51.385889  583738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:51.385919  583738 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:51.385930  583738 main.go:141] libmachine: (embed-certs-565837) Calling .Close
	I0120 12:33:51.386125  583738 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:51.386178  583738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:51.386259  583738 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:51.386275  583738 main.go:141] libmachine: (embed-certs-565837) Calling .Close
	I0120 12:33:51.386496  583738 main.go:141] libmachine: (embed-certs-565837) DBG | Closing plugin on server side
	I0120 12:33:51.386496  583738 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:51.386530  583738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:51.386543  583738 main.go:141] libmachine: (embed-certs-565837) DBG | Closing plugin on server side
	I0120 12:33:51.386580  583738 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:51.386587  583738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:51.442874  583738 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:51.442906  583738 main.go:141] libmachine: (embed-certs-565837) Calling .Close
	I0120 12:33:51.443290  583738 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:51.443315  583738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:51.710211  583738 pod_ready.go:93] pod "coredns-668d6bf9bc-42d6j" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:51.710249  583738 pod_ready.go:82] duration metric: took 2.018682268s for pod "coredns-668d6bf9bc-42d6j" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:51.710264  583738 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-vbpfb" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:52.222136  583738 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.928466868s)
	I0120 12:33:52.222216  583738 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:52.222240  583738 main.go:141] libmachine: (embed-certs-565837) Calling .Close
	I0120 12:33:52.222771  583738 main.go:141] libmachine: (embed-certs-565837) DBG | Closing plugin on server side
	I0120 12:33:52.222812  583738 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:52.222832  583738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:52.222840  583738 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:52.222846  583738 main.go:141] libmachine: (embed-certs-565837) Calling .Close
	I0120 12:33:52.223218  583738 main.go:141] libmachine: (embed-certs-565837) DBG | Closing plugin on server side
	I0120 12:33:52.223255  583738 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:52.223260  583738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:52.223269  583738 addons.go:479] Verifying addon metrics-server=true in "embed-certs-565837"
	I0120 12:33:53.237591  583738 pod_ready.go:93] pod "coredns-668d6bf9bc-vbpfb" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:53.237622  583738 pod_ready.go:82] duration metric: took 1.527349592s for pod "coredns-668d6bf9bc-vbpfb" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:53.237636  583738 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:53.293660  583738 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.199955524s)
	I0120 12:33:53.293740  583738 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:53.293762  583738 main.go:141] libmachine: (embed-certs-565837) Calling .Close
	I0120 12:33:53.294130  583738 main.go:141] libmachine: (embed-certs-565837) DBG | Closing plugin on server side
	I0120 12:33:53.294185  583738 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:53.294193  583738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:53.294202  583738 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:53.294209  583738 main.go:141] libmachine: (embed-certs-565837) Calling .Close
	I0120 12:33:53.294450  583738 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:53.294471  583738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:53.296524  583738 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-565837 addons enable metrics-server
	
	I0120 12:33:53.298241  583738 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0120 12:33:53.299905  583738 addons.go:514] duration metric: took 3.928508803s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
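The addon enable step above amounts to copying each manifest into /etc/kubernetes/addons on the node and applying it with the bundled kubectl against the embedded kubeconfig. Below is a minimal sketch of re-running one of those applies by hand, assuming a shell on the node (for example via "minikube ssh -p embed-certs-565837"); the paths come from the log lines above, and this is illustrative only, not minikube's own code.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the ssh_runner invocations above: sudo accepts the
	// KUBECONFIG=... assignment placed before the command it runs.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.32.0/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubectl apply failed: %v", err)
	}
}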
	I0120 12:33:55.254666  583738 pod_ready.go:103] pod "etcd-embed-certs-565837" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:55.747762  583738 pod_ready.go:93] pod "etcd-embed-certs-565837" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:55.747805  583738 pod_ready.go:82] duration metric: took 2.510158546s for pod "etcd-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:55.747820  583738 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:55.755104  583738 pod_ready.go:93] pod "kube-apiserver-embed-certs-565837" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:55.755128  583738 pod_ready.go:82] duration metric: took 7.299651ms for pod "kube-apiserver-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:55.755140  583738 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.388625  583738 pod_ready.go:93] pod "kube-controller-manager-embed-certs-565837" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:57.388660  583738 pod_ready.go:82] duration metric: took 1.633511482s for pod "kube-controller-manager-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.388677  583738 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8zz8b" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.397644  583738 pod_ready.go:93] pod "kube-proxy-8zz8b" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:57.397678  583738 pod_ready.go:82] duration metric: took 8.990904ms for pod "kube-proxy-8zz8b" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.397692  583738 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.905470  583738 pod_ready.go:93] pod "kube-scheduler-embed-certs-565837" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:57.905506  583738 pod_ready.go:82] duration metric: took 507.803445ms for pod "kube-scheduler-embed-certs-565837" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.905519  583738 pod_ready.go:39] duration metric: took 8.228893845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
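Each of the "Ready" waits above checks the PodReady condition on the pod's status. A rough client-go sketch of the same check follows, assuming the kubeconfig path used inside the VM; it is illustrative only, not the pod_ready.go implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, i.e. the state
// the log prints as has status "Ready":"True".
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"kube-scheduler-embed-certs-565837", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Name, "Ready:", isPodReady(pod))
}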
	I0120 12:33:57.905541  583738 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:33:57.905607  583738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:57.926736  583738 api_server.go:72] duration metric: took 8.555403242s to wait for apiserver process to appear ...
	I0120 12:33:57.926771  583738 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:33:57.926797  583738 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0120 12:33:57.933493  583738 api_server.go:279] https://192.168.39.156:8443/healthz returned 200:
	ok
	I0120 12:33:57.935097  583738 api_server.go:141] control plane version: v1.32.0
	I0120 12:33:57.935123  583738 api_server.go:131] duration metric: took 8.343754ms to wait for apiserver health ...
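The healthz probe above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" is treated as healthy. A self-contained sketch of the same probe follows (the endpoint comes from the log; skipping TLS verification is a shortcut for a manual check, not what minikube does).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The apiserver certificate is signed by the cluster-local CA, so a quick
	// manual probe either trusts that CA or, as here, skips verification.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.156:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}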
	I0120 12:33:57.935133  583738 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:33:57.944413  583738 system_pods.go:59] 9 kube-system pods found
	I0120 12:33:57.944526  583738 system_pods.go:61] "coredns-668d6bf9bc-42d6j" [ff7bf3de-5b3e-46d5-ba3e-da0db59b56f9] Running
	I0120 12:33:57.944550  583738 system_pods.go:61] "coredns-668d6bf9bc-vbpfb" [01ea6835-10f6-487b-a11c-b0902f7ad182] Running
	I0120 12:33:57.944566  583738 system_pods.go:61] "etcd-embed-certs-565837" [8fd36982-f008-4315-a798-297b63ee7199] Running
	I0120 12:33:57.944581  583738 system_pods.go:61] "kube-apiserver-embed-certs-565837" [9ccbc39a-edf5-4bb7-b673-557efe30a4d8] Running
	I0120 12:33:57.944594  583738 system_pods.go:61] "kube-controller-manager-embed-certs-565837" [c7ae8f88-de29-44b3-80c3-199fa94ef356] Running
	I0120 12:33:57.944616  583738 system_pods.go:61] "kube-proxy-8zz8b" [77cb08dd-1882-44ed-80f9-8e343366f01e] Running
	I0120 12:33:57.944634  583738 system_pods.go:61] "kube-scheduler-embed-certs-565837" [f2e7e6ec-03a7-4a40-8204-d5ae6f5da571] Running
	I0120 12:33:57.944653  583738 system_pods.go:61] "metrics-server-f79f97bbb-rv4lr" [9df96932-8f93-4fe2-9802-b0bc37a64f6c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:33:57.944671  583738 system_pods.go:61] "storage-provisioner" [617e5846-5d79-4d58-a09a-08bcc3797e4c] Running
	I0120 12:33:57.944690  583738 system_pods.go:74] duration metric: took 9.550166ms to wait for pod list to return data ...
	I0120 12:33:57.944712  583738 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:33:57.949420  583738 default_sa.go:45] found service account: "default"
	I0120 12:33:57.949518  583738 default_sa.go:55] duration metric: took 4.786777ms for default service account to be created ...
	I0120 12:33:57.949545  583738 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:33:57.957489  583738 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-565837 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-565837 -n embed-certs-565837
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-565837 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-565837 logs -n 25: (1.300253688s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo cat                    | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo cat                    | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo cat                    | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC |                     |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-912009 sudo                        | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-912009                             | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
	| delete  | -p no-preload-677886                                 | no-preload-677886     | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC | 20 Jan 25 12:50 UTC |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:34:55
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:34:55.317626  593695 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:34:55.318098  593695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:34:55.318140  593695 out.go:358] Setting ErrFile to fd 2...
	I0120 12:34:55.318166  593695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:34:55.318820  593695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	I0120 12:34:55.319727  593695 out.go:352] Setting JSON to false
	I0120 12:34:55.321284  593695 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8237,"bootTime":1737368258,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:34:55.321400  593695 start.go:139] virtualization: kvm guest
	I0120 12:34:55.323443  593695 out.go:177] * [custom-flannel-912009] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:34:55.325326  593695 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:34:55.325338  593695 notify.go:220] Checking for updates...
	I0120 12:34:55.328258  593695 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:34:55.329657  593695 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 12:34:55.331093  593695 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 12:34:55.332440  593695 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:34:55.333657  593695 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:34:55.335502  593695 config.go:182] Loaded profile config "calico-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:34:55.335654  593695 config.go:182] Loaded profile config "embed-certs-565837": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:34:55.335772  593695 config.go:182] Loaded profile config "no-preload-677886": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:34:55.335906  593695 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:34:55.378824  593695 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 12:34:55.380206  593695 start.go:297] selected driver: kvm2
	I0120 12:34:55.380226  593695 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:34:55.380239  593695 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:34:55.380924  593695 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:34:55.380997  593695 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-530330/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:34:55.398891  593695 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:34:55.398946  593695 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 12:34:55.399228  593695 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:34:55.399267  593695 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0120 12:34:55.399286  593695 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0120 12:34:55.399352  593695 start.go:340] cluster config:
	{Name:custom-flannel-912009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:34:55.399486  593695 iso.go:125] acquiring lock: {Name:mk734d848ce0e9a68d8d00ecbd0f5085f599b42f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:34:55.402211  593695 out.go:177] * Starting "custom-flannel-912009" primary control-plane node in "custom-flannel-912009" cluster
	I0120 12:34:55.403487  593695 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:34:55.403526  593695 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
	I0120 12:34:55.403534  593695 cache.go:56] Caching tarball of preloaded images
	I0120 12:34:55.403644  593695 preload.go:172] Found /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0120 12:34:55.403657  593695 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
	I0120 12:34:55.403760  593695 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/config.json ...
	I0120 12:34:55.403781  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/config.json: {Name:mk1f5bd3895f8f37884cdb08f1e892c201dc31bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:34:55.403947  593695 start.go:360] acquireMachinesLock for custom-flannel-912009: {Name:mkcd5f2d114897136dd2343f9fcf468e718657e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:34:55.403984  593695 start.go:364] duration metric: took 19.852µs to acquireMachinesLock for "custom-flannel-912009"
	I0120 12:34:55.404004  593695 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-912009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 12:34:55.404078  593695 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 12:34:54.418015  591909 node_ready.go:53] node "calico-912009" has status "Ready":"False"
	I0120 12:34:56.418900  591909 node_ready.go:53] node "calico-912009" has status "Ready":"False"
	I0120 12:34:58.918122  591909 node_ready.go:53] node "calico-912009" has status "Ready":"False"
	I0120 12:34:55.405689  593695 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0120 12:34:55.405857  593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:34:55.405898  593695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:55.421394  593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46729
	I0120 12:34:55.421940  593695 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:55.422589  593695 main.go:141] libmachine: Using API Version  1
	I0120 12:34:55.422629  593695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:55.423222  593695 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:55.423525  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetMachineName
	I0120 12:34:55.423711  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:34:55.423949  593695 start.go:159] libmachine.API.Create for "custom-flannel-912009" (driver="kvm2")
	I0120 12:34:55.424001  593695 client.go:168] LocalClient.Create starting
	I0120 12:34:55.424053  593695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem
	I0120 12:34:55.424104  593695 main.go:141] libmachine: Decoding PEM data...
	I0120 12:34:55.424127  593695 main.go:141] libmachine: Parsing certificate...
	I0120 12:34:55.424219  593695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem
	I0120 12:34:55.424244  593695 main.go:141] libmachine: Decoding PEM data...
	I0120 12:34:55.424262  593695 main.go:141] libmachine: Parsing certificate...
	I0120 12:34:55.424287  593695 main.go:141] libmachine: Running pre-create checks...
	I0120 12:34:55.424305  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .PreCreateCheck
	I0120 12:34:55.424734  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetConfigRaw
	I0120 12:34:55.425305  593695 main.go:141] libmachine: Creating machine...
	I0120 12:34:55.425318  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Create
	I0120 12:34:55.425495  593695 main.go:141] libmachine: (custom-flannel-912009) creating KVM machine...
	I0120 12:34:55.425519  593695 main.go:141] libmachine: (custom-flannel-912009) creating network...
	I0120 12:34:55.426842  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found existing default KVM network
	I0120 12:34:55.428088  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:55.427921  593717 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:32:62:a8} reservation:<nil>}
	I0120 12:34:55.429366  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:55.429267  593717 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001194e0}
	I0120 12:34:55.429388  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | created network xml: 
	I0120 12:34:55.429399  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | <network>
	I0120 12:34:55.429409  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |   <name>mk-custom-flannel-912009</name>
	I0120 12:34:55.429417  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |   <dns enable='no'/>
	I0120 12:34:55.429422  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |   
	I0120 12:34:55.429440  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0120 12:34:55.429448  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |     <dhcp>
	I0120 12:34:55.429459  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0120 12:34:55.429475  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |     </dhcp>
	I0120 12:34:55.429487  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |   </ip>
	I0120 12:34:55.429497  593695 main.go:141] libmachine: (custom-flannel-912009) DBG |   
	I0120 12:34:55.429513  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | </network>
	I0120 12:34:55.429524  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | 
	I0120 12:34:55.434573  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | trying to create private KVM network mk-custom-flannel-912009 192.168.50.0/24...
	I0120 12:34:55.523742  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | private KVM network mk-custom-flannel-912009 192.168.50.0/24 created
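The private network created above is an ordinary libvirt network: the driver generates the <network> XML shown in the log, defines it, and starts it. The kvm2 driver does this through libvirt's Go bindings; the sketch below is a hand-equivalent that shells out to virsh instead (runVirsh and net.xml are illustrative names, not part of minikube).

package main

import (
	"log"
	"os/exec"
)

// runVirsh runs a single virsh subcommand and aborts on failure.
func runVirsh(args ...string) {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("virsh %v: %v\n%s", args, err, out)
	}
	log.Printf("virsh %v: ok", args)
}

func main() {
	// net.xml would hold the <network> definition printed in the log above.
	runVirsh("net-define", "net.xml")
	runVirsh("net-start", "mk-custom-flannel-912009")
	runVirsh("net-autostart", "mk-custom-flannel-912009")
}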
	I0120 12:34:55.523770  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:55.523396  593717 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 12:34:55.523822  593695 main.go:141] libmachine: (custom-flannel-912009) setting up store path in /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009 ...
	I0120 12:34:55.523855  593695 main.go:141] libmachine: (custom-flannel-912009) building disk image from file:///home/jenkins/minikube-integration/20151-530330/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 12:34:55.523992  593695 main.go:141] libmachine: (custom-flannel-912009) Downloading /home/jenkins/minikube-integration/20151-530330/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20151-530330/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 12:34:55.815001  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:55.814810  593717 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa...
	I0120 12:34:56.245898  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:56.245727  593717 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/custom-flannel-912009.rawdisk...
	I0120 12:34:56.245930  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Writing magic tar header
	I0120 12:34:56.245949  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Writing SSH key tar header
	I0120 12:34:56.245964  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:56.245896  593717 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009 ...
	I0120 12:34:56.245994  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009
	I0120 12:34:56.246097  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-530330/.minikube/machines
	I0120 12:34:56.246128  593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009 (perms=drwx------)
	I0120 12:34:56.246141  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 12:34:56.246172  593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration/20151-530330/.minikube/machines (perms=drwxr-xr-x)
	I0120 12:34:56.246200  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-530330
	I0120 12:34:56.246212  593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration/20151-530330/.minikube (perms=drwxr-xr-x)
	I0120 12:34:56.246229  593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration/20151-530330 (perms=drwxrwxr-x)
	I0120 12:34:56.246238  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 12:34:56.246247  593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 12:34:56.246258  593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 12:34:56.246265  593695 main.go:141] libmachine: (custom-flannel-912009) creating domain...
	I0120 12:34:56.246277  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins
	I0120 12:34:56.246285  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home
	I0120 12:34:56.246295  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | skipping /home - not owner
	I0120 12:34:56.247428  593695 main.go:141] libmachine: (custom-flannel-912009) define libvirt domain using xml: 
	I0120 12:34:56.247449  593695 main.go:141] libmachine: (custom-flannel-912009) <domain type='kvm'>
	I0120 12:34:56.247459  593695 main.go:141] libmachine: (custom-flannel-912009)   <name>custom-flannel-912009</name>
	I0120 12:34:56.247467  593695 main.go:141] libmachine: (custom-flannel-912009)   <memory unit='MiB'>3072</memory>
	I0120 12:34:56.247482  593695 main.go:141] libmachine: (custom-flannel-912009)   <vcpu>2</vcpu>
	I0120 12:34:56.247493  593695 main.go:141] libmachine: (custom-flannel-912009)   <features>
	I0120 12:34:56.247502  593695 main.go:141] libmachine: (custom-flannel-912009)     <acpi/>
	I0120 12:34:56.247525  593695 main.go:141] libmachine: (custom-flannel-912009)     <apic/>
	I0120 12:34:56.247552  593695 main.go:141] libmachine: (custom-flannel-912009)     <pae/>
	I0120 12:34:56.247575  593695 main.go:141] libmachine: (custom-flannel-912009)     
	I0120 12:34:56.247586  593695 main.go:141] libmachine: (custom-flannel-912009)   </features>
	I0120 12:34:56.247595  593695 main.go:141] libmachine: (custom-flannel-912009)   <cpu mode='host-passthrough'>
	I0120 12:34:56.247606  593695 main.go:141] libmachine: (custom-flannel-912009)   
	I0120 12:34:56.247615  593695 main.go:141] libmachine: (custom-flannel-912009)   </cpu>
	I0120 12:34:56.247625  593695 main.go:141] libmachine: (custom-flannel-912009)   <os>
	I0120 12:34:56.247635  593695 main.go:141] libmachine: (custom-flannel-912009)     <type>hvm</type>
	I0120 12:34:56.247644  593695 main.go:141] libmachine: (custom-flannel-912009)     <boot dev='cdrom'/>
	I0120 12:34:56.247658  593695 main.go:141] libmachine: (custom-flannel-912009)     <boot dev='hd'/>
	I0120 12:34:56.247670  593695 main.go:141] libmachine: (custom-flannel-912009)     <bootmenu enable='no'/>
	I0120 12:34:56.247682  593695 main.go:141] libmachine: (custom-flannel-912009)   </os>
	I0120 12:34:56.247690  593695 main.go:141] libmachine: (custom-flannel-912009)   <devices>
	I0120 12:34:56.247701  593695 main.go:141] libmachine: (custom-flannel-912009)     <disk type='file' device='cdrom'>
	I0120 12:34:56.247717  593695 main.go:141] libmachine: (custom-flannel-912009)       <source file='/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/boot2docker.iso'/>
	I0120 12:34:56.247732  593695 main.go:141] libmachine: (custom-flannel-912009)       <target dev='hdc' bus='scsi'/>
	I0120 12:34:56.247741  593695 main.go:141] libmachine: (custom-flannel-912009)       <readonly/>
	I0120 12:34:56.247748  593695 main.go:141] libmachine: (custom-flannel-912009)     </disk>
	I0120 12:34:56.247776  593695 main.go:141] libmachine: (custom-flannel-912009)     <disk type='file' device='disk'>
	I0120 12:34:56.247790  593695 main.go:141] libmachine: (custom-flannel-912009)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 12:34:56.247828  593695 main.go:141] libmachine: (custom-flannel-912009)       <source file='/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/custom-flannel-912009.rawdisk'/>
	I0120 12:34:56.247852  593695 main.go:141] libmachine: (custom-flannel-912009)       <target dev='hda' bus='virtio'/>
	I0120 12:34:56.247876  593695 main.go:141] libmachine: (custom-flannel-912009)     </disk>
	I0120 12:34:56.247896  593695 main.go:141] libmachine: (custom-flannel-912009)     <interface type='network'>
	I0120 12:34:56.247910  593695 main.go:141] libmachine: (custom-flannel-912009)       <source network='mk-custom-flannel-912009'/>
	I0120 12:34:56.247921  593695 main.go:141] libmachine: (custom-flannel-912009)       <model type='virtio'/>
	I0120 12:34:56.247932  593695 main.go:141] libmachine: (custom-flannel-912009)     </interface>
	I0120 12:34:56.247939  593695 main.go:141] libmachine: (custom-flannel-912009)     <interface type='network'>
	I0120 12:34:56.247951  593695 main.go:141] libmachine: (custom-flannel-912009)       <source network='default'/>
	I0120 12:34:56.247968  593695 main.go:141] libmachine: (custom-flannel-912009)       <model type='virtio'/>
	I0120 12:34:56.247979  593695 main.go:141] libmachine: (custom-flannel-912009)     </interface>
	I0120 12:34:56.247989  593695 main.go:141] libmachine: (custom-flannel-912009)     <serial type='pty'>
	I0120 12:34:56.247999  593695 main.go:141] libmachine: (custom-flannel-912009)       <target port='0'/>
	I0120 12:34:56.248009  593695 main.go:141] libmachine: (custom-flannel-912009)     </serial>
	I0120 12:34:56.248018  593695 main.go:141] libmachine: (custom-flannel-912009)     <console type='pty'>
	I0120 12:34:56.248033  593695 main.go:141] libmachine: (custom-flannel-912009)       <target type='serial' port='0'/>
	I0120 12:34:56.248044  593695 main.go:141] libmachine: (custom-flannel-912009)     </console>
	I0120 12:34:56.248063  593695 main.go:141] libmachine: (custom-flannel-912009)     <rng model='virtio'>
	I0120 12:34:56.248077  593695 main.go:141] libmachine: (custom-flannel-912009)       <backend model='random'>/dev/random</backend>
	I0120 12:34:56.248087  593695 main.go:141] libmachine: (custom-flannel-912009)     </rng>
	I0120 12:34:56.248098  593695 main.go:141] libmachine: (custom-flannel-912009)     
	I0120 12:34:56.248108  593695 main.go:141] libmachine: (custom-flannel-912009)     
	I0120 12:34:56.248126  593695 main.go:141] libmachine: (custom-flannel-912009)   </devices>
	I0120 12:34:56.248143  593695 main.go:141] libmachine: (custom-flannel-912009) </domain>
	I0120 12:34:56.248157  593695 main.go:141] libmachine: (custom-flannel-912009) 
	I0120 12:34:56.251886  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:5c:75:87 in network default
	I0120 12:34:56.252644  593695 main.go:141] libmachine: (custom-flannel-912009) starting domain...
	I0120 12:34:56.252667  593695 main.go:141] libmachine: (custom-flannel-912009) ensuring networks are active...
	I0120 12:34:56.252679  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:56.253478  593695 main.go:141] libmachine: (custom-flannel-912009) Ensuring network default is active
	I0120 12:34:56.253856  593695 main.go:141] libmachine: (custom-flannel-912009) Ensuring network mk-custom-flannel-912009 is active
	I0120 12:34:56.254478  593695 main.go:141] libmachine: (custom-flannel-912009) getting domain XML...
	I0120 12:34:56.255132  593695 main.go:141] libmachine: (custom-flannel-912009) creating domain...
	I0120 12:34:57.617443  593695 main.go:141] libmachine: (custom-flannel-912009) waiting for IP...
	I0120 12:34:57.618468  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:57.618975  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:34:57.619079  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:57.618982  593717 retry.go:31] will retry after 310.833975ms: waiting for domain to come up
	I0120 12:34:57.931884  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:57.932609  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:34:57.932671  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:57.932587  593717 retry.go:31] will retry after 389.24926ms: waiting for domain to come up
	I0120 12:34:58.323123  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:58.323741  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:34:58.323766  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:58.323662  593717 retry.go:31] will retry after 328.51544ms: waiting for domain to come up
	I0120 12:34:58.654475  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:58.654999  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:34:58.655031  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:58.654972  593717 retry.go:31] will retry after 459.188002ms: waiting for domain to come up
	I0120 12:34:59.115485  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:59.116075  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:34:59.116099  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:59.116039  593717 retry.go:31] will retry after 671.328829ms: waiting for domain to come up
	I0120 12:34:59.788826  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:34:59.789486  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:34:59.789535  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:59.789441  593717 retry.go:31] will retry after 722.417342ms: waiting for domain to come up
	I0120 12:35:00.417246  591909 node_ready.go:49] node "calico-912009" has status "Ready":"True"
	I0120 12:35:00.417269  591909 node_ready.go:38] duration metric: took 8.003348027s for node "calico-912009" to be "Ready" ...
	I0120 12:35:00.417280  591909 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:00.427079  591909 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:02.434616  591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:00.513299  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:00.513926  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:00.513953  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:00.513882  593717 retry.go:31] will retry after 1.004102642s: waiting for domain to come up
	I0120 12:35:01.520257  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:01.520856  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:01.520887  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:01.520792  593717 retry.go:31] will retry after 1.187548146s: waiting for domain to come up
	I0120 12:35:02.710370  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:02.710926  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:02.710960  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:02.710891  593717 retry.go:31] will retry after 1.130666152s: waiting for domain to come up
	I0120 12:35:03.843031  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:03.843591  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:03.843657  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:03.843573  593717 retry.go:31] will retry after 2.084857552s: waiting for domain to come up
	I0120 12:35:04.932987  591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:06.934911  591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:05.930313  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:05.930995  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:05.931129  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:05.931024  593717 retry.go:31] will retry after 2.721943033s: waiting for domain to come up
	I0120 12:35:08.655556  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:08.656095  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:08.656125  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:08.656041  593717 retry.go:31] will retry after 3.50397462s: waiting for domain to come up
	I0120 12:35:09.434933  591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:11.938250  591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:12.161925  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:12.162527  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:12.162555  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:12.162507  593717 retry.go:31] will retry after 4.028021149s: waiting for domain to come up
	I0120 12:35:14.433852  591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:16.936370  591909 pod_ready.go:93] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:16.936407  591909 pod_ready.go:82] duration metric: took 16.509299944s for pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:16.936423  591909 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-58f5q" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:18.944599  591909 pod_ready.go:103] pod "calico-node-58f5q" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:16.192015  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:16.192673  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
	I0120 12:35:16.192705  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:16.192623  593717 retry.go:31] will retry after 4.250339401s: waiting for domain to come up
	I0120 12:35:21.444844  591909 pod_ready.go:103] pod "calico-node-58f5q" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:23.961659  591909 pod_ready.go:93] pod "calico-node-58f5q" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:23.961686  591909 pod_ready.go:82] duration metric: took 7.025255499s for pod "calico-node-58f5q" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:23.961697  591909 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-qtrbt" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:23.986722  591909 pod_ready.go:93] pod "coredns-668d6bf9bc-qtrbt" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:23.986746  591909 pod_ready.go:82] duration metric: took 25.042668ms for pod "coredns-668d6bf9bc-qtrbt" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:23.986757  591909 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:23.996405  591909 pod_ready.go:93] pod "etcd-calico-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:23.996431  591909 pod_ready.go:82] duration metric: took 9.66769ms for pod "etcd-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:23.996443  591909 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.005532  591909 pod_ready.go:93] pod "kube-apiserver-calico-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:24.005568  591909 pod_ready.go:82] duration metric: took 9.117419ms for pod "kube-apiserver-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.005586  591909 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.014286  591909 pod_ready.go:93] pod "kube-controller-manager-calico-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:24.014320  591909 pod_ready.go:82] duration metric: took 8.724239ms for pod "kube-controller-manager-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.014336  591909 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-d42xv" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:20.444937  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:20.445623  593695 main.go:141] libmachine: (custom-flannel-912009) found domain IP: 192.168.50.190
	I0120 12:35:20.445652  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has current primary IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:20.445660  593695 main.go:141] libmachine: (custom-flannel-912009) reserving static IP address...
	I0120 12:35:20.446017  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find host DHCP lease matching {name: "custom-flannel-912009", mac: "52:54:00:d9:0c:b1", ip: "192.168.50.190"} in network mk-custom-flannel-912009
	I0120 12:35:20.527289  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Getting to WaitForSSH function...
	I0120 12:35:20.527318  593695 main.go:141] libmachine: (custom-flannel-912009) reserved static IP address 192.168.50.190 for domain custom-flannel-912009
	I0120 12:35:20.527331  593695 main.go:141] libmachine: (custom-flannel-912009) waiting for SSH...
	I0120 12:35:20.530131  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:20.530494  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009
	I0120 12:35:20.530526  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find defined IP address of network mk-custom-flannel-912009 interface with MAC address 52:54:00:d9:0c:b1
	I0120 12:35:20.530642  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Using SSH client type: external
	I0120 12:35:20.530670  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa (-rw-------)
	I0120 12:35:20.530724  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:35:20.530748  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | About to run SSH command:
	I0120 12:35:20.530761  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | exit 0
	I0120 12:35:20.534553  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | SSH cmd err, output: exit status 255: 
	I0120 12:35:20.534581  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0120 12:35:20.534592  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | command : exit 0
	I0120 12:35:20.534604  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | err     : exit status 255
	I0120 12:35:20.534639  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | output  : 
	I0120 12:35:23.534852  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Getting to WaitForSSH function...
	I0120 12:35:23.537219  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.537562  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:23.537593  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.537711  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Using SSH client type: external
	I0120 12:35:23.537734  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa (-rw-------)
	I0120 12:35:23.537766  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:35:23.537778  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | About to run SSH command:
	I0120 12:35:23.537786  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | exit 0
	I0120 12:35:23.666504  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | SSH cmd err, output: <nil>: 
	I0120 12:35:23.666844  593695 main.go:141] libmachine: (custom-flannel-912009) KVM machine creation complete
	I0120 12:35:23.667202  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetConfigRaw
	I0120 12:35:23.667966  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:23.668197  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:23.668360  593695 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 12:35:23.668377  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetState
	I0120 12:35:23.670153  593695 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 12:35:23.670169  593695 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 12:35:23.670175  593695 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 12:35:23.670181  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:23.673109  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.673528  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:23.673551  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.673837  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:23.674105  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:23.674329  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:23.674532  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:23.674693  593695 main.go:141] libmachine: Using SSH client type: native
	I0120 12:35:23.674971  593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.190 22 <nil> <nil>}
	I0120 12:35:23.674989  593695 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 12:35:23.781486  593695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:35:23.781512  593695 main.go:141] libmachine: Detecting the provisioner...
	I0120 12:35:23.781520  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:23.784548  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.785046  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:23.785077  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.785303  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:23.785511  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:23.785694  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:23.785856  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:23.786038  593695 main.go:141] libmachine: Using SSH client type: native
	I0120 12:35:23.786249  593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.190 22 <nil> <nil>}
	I0120 12:35:23.786263  593695 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 12:35:23.895060  593695 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 12:35:23.895164  593695 main.go:141] libmachine: found compatible host: buildroot
	I0120 12:35:23.895185  593695 main.go:141] libmachine: Provisioning with buildroot...
	I0120 12:35:23.895198  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetMachineName
	I0120 12:35:23.895470  593695 buildroot.go:166] provisioning hostname "custom-flannel-912009"
	I0120 12:35:23.895510  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetMachineName
	I0120 12:35:23.895752  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:23.899661  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.900121  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:23.900148  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:23.900337  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:23.900565  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:23.900738  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:23.900892  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:23.901167  593695 main.go:141] libmachine: Using SSH client type: native
	I0120 12:35:23.901402  593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.190 22 <nil> <nil>}
	I0120 12:35:23.901418  593695 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-912009 && echo "custom-flannel-912009" | sudo tee /etc/hostname
	I0120 12:35:24.029708  593695 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-912009
	
	I0120 12:35:24.029744  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.033017  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.033445  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.033478  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.033777  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:24.034045  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.034311  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.034484  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:24.034713  593695 main.go:141] libmachine: Using SSH client type: native
	I0120 12:35:24.034960  593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.190 22 <nil> <nil>}
	I0120 12:35:24.034989  593695 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-912009' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-912009/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-912009' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:35:24.155682  593695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:35:24.155719  593695 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-530330/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-530330/.minikube}
	I0120 12:35:24.155742  593695 buildroot.go:174] setting up certificates
	I0120 12:35:24.155752  593695 provision.go:84] configureAuth start
	I0120 12:35:24.155761  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetMachineName
	I0120 12:35:24.156072  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetIP
	I0120 12:35:24.159246  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.159526  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.159559  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.159719  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.162295  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.162595  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.162622  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.162796  593695 provision.go:143] copyHostCerts
	I0120 12:35:24.162871  593695 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem, removing ...
	I0120 12:35:24.162897  593695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem
	I0120 12:35:24.163012  593695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem (1675 bytes)
	I0120 12:35:24.163166  593695 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem, removing ...
	I0120 12:35:24.163182  593695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem
	I0120 12:35:24.163224  593695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem (1078 bytes)
	I0120 12:35:24.163301  593695 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem, removing ...
	I0120 12:35:24.163311  593695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem
	I0120 12:35:24.163352  593695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem (1123 bytes)
	I0120 12:35:24.163530  593695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-912009 san=[127.0.0.1 192.168.50.190 custom-flannel-912009 localhost minikube]
	I0120 12:35:24.241848  593695 provision.go:177] copyRemoteCerts
	I0120 12:35:24.241916  593695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:35:24.241950  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.244770  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.245114  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.245138  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.245331  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:24.245514  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.245668  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:24.245760  593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
	I0120 12:35:24.332818  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:35:24.361699  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 12:35:24.391399  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 12:35:24.418431  593695 provision.go:87] duration metric: took 262.665168ms to configureAuth
	I0120 12:35:24.418473  593695 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:35:24.418753  593695 config.go:182] Loaded profile config "custom-flannel-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:35:24.418792  593695 main.go:141] libmachine: Checking connection to Docker...
	I0120 12:35:24.418805  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetURL
	I0120 12:35:24.420068  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | using libvirt version 6000000
	I0120 12:35:24.422715  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.423162  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.423190  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.423456  593695 main.go:141] libmachine: Docker is up and running!
	I0120 12:35:24.423476  593695 main.go:141] libmachine: Reticulating splines...
	I0120 12:35:24.423486  593695 client.go:171] duration metric: took 28.999470441s to LocalClient.Create
	I0120 12:35:24.423515  593695 start.go:167] duration metric: took 28.999566096s to libmachine.API.Create "custom-flannel-912009"
	I0120 12:35:24.423528  593695 start.go:293] postStartSetup for "custom-flannel-912009" (driver="kvm2")
	I0120 12:35:24.423542  593695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:35:24.423569  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:24.423829  593695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:35:24.423855  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.426268  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.426582  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.426609  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.426817  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:24.427012  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.427219  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:24.427395  593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
	I0120 12:35:24.509285  593695 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:35:24.513984  593695 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:35:24.514016  593695 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-530330/.minikube/addons for local assets ...
	I0120 12:35:24.514091  593695 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-530330/.minikube/files for local assets ...
	I0120 12:35:24.514173  593695 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem -> 5375812.pem in /etc/ssl/certs
	I0120 12:35:24.514260  593695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:35:24.523956  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem --> /etc/ssl/certs/5375812.pem (1708 bytes)
	I0120 12:35:24.553908  593695 start.go:296] duration metric: took 130.36042ms for postStartSetup
	I0120 12:35:24.553975  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetConfigRaw
	I0120 12:35:24.554680  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetIP
	I0120 12:35:24.557887  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.558360  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.558399  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.558632  593695 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/config.json ...
	I0120 12:35:24.558858  593695 start.go:128] duration metric: took 29.154769177s to createHost
	I0120 12:35:24.558884  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.561339  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.561943  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.561994  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.562136  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:24.562360  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.562560  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.562828  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:24.563024  593695 main.go:141] libmachine: Using SSH client type: native
	I0120 12:35:24.563258  593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.190 22 <nil> <nil>}
	I0120 12:35:24.563273  593695 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:35:24.671152  593695 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737376524.647779402
	
	I0120 12:35:24.671177  593695 fix.go:216] guest clock: 1737376524.647779402
	I0120 12:35:24.671187  593695 fix.go:229] Guest: 2025-01-20 12:35:24.647779402 +0000 UTC Remote: 2025-01-20 12:35:24.558871919 +0000 UTC m=+29.288117911 (delta=88.907483ms)
	I0120 12:35:24.671208  593695 fix.go:200] guest clock delta is within tolerance: 88.907483ms
	I0120 12:35:24.671213  593695 start.go:83] releasing machines lock for "custom-flannel-912009", held for 29.26722146s
	I0120 12:35:24.671257  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:24.671597  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetIP
	I0120 12:35:24.674668  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.675144  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.675179  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.675303  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:24.675888  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:24.676102  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:24.676270  593695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:35:24.676339  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.676389  593695 ssh_runner.go:195] Run: cat /version.json
	I0120 12:35:24.676418  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:24.679423  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.679453  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.679849  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.679890  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:24.679912  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.679941  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:24.680114  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:24.680284  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:24.680292  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.680454  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:24.680472  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:24.680601  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:24.680657  593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
	I0120 12:35:24.680719  593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
	I0120 12:35:24.767818  593695 ssh_runner.go:195] Run: systemctl --version
	I0120 12:35:24.795757  593695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:35:24.801932  593695 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:35:24.802005  593695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:35:24.822047  593695 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:35:24.822074  593695 start.go:495] detecting cgroup driver to use...
	I0120 12:35:24.822147  593695 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0120 12:35:24.853585  593695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0120 12:35:24.869225  593695 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:35:24.869302  593695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:35:24.883816  593695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:35:24.897972  593695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:35:25.028005  593695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:35:25.171259  593695 docker.go:233] disabling docker service ...
	I0120 12:35:25.171345  593695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:35:25.187813  593695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:35:25.201348  593695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:35:24.343295  591909 pod_ready.go:93] pod "kube-proxy-d42xv" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:24.343328  591909 pod_ready.go:82] duration metric: took 328.982488ms for pod "kube-proxy-d42xv" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.343343  591909 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.741158  591909 pod_ready.go:93] pod "kube-scheduler-calico-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:24.741188  591909 pod_ready.go:82] duration metric: took 397.835554ms for pod "kube-scheduler-calico-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:24.741204  591909 pod_ready.go:39] duration metric: took 24.323905541s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:24.741225  591909 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:35:24.741287  591909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:24.758948  591909 api_server.go:72] duration metric: took 33.170230566s to wait for apiserver process to appear ...
	I0120 12:35:24.758984  591909 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:35:24.759013  591909 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8443/healthz ...
	I0120 12:35:24.763591  591909 api_server.go:279] https://192.168.61.244:8443/healthz returned 200:
	ok
	I0120 12:35:24.764729  591909 api_server.go:141] control plane version: v1.32.0
	I0120 12:35:24.764761  591909 api_server.go:131] duration metric: took 5.768981ms to wait for apiserver health ...
	I0120 12:35:24.764772  591909 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:35:24.947474  591909 system_pods.go:59] 9 kube-system pods found
	I0120 12:35:24.947535  591909 system_pods.go:61] "calico-kube-controllers-5745477d4d-mz446" [84466c15-f6c8-4e5e-9e75-a9f5712ec8e6] Running
	I0120 12:35:24.947545  591909 system_pods.go:61] "calico-node-58f5q" [4c659cf9-7e8b-4f9e-a251-005a41562c7c] Running
	I0120 12:35:24.947551  591909 system_pods.go:61] "coredns-668d6bf9bc-qtrbt" [2bf73e76-3e51-4775-931e-49299625214f] Running
	I0120 12:35:24.947555  591909 system_pods.go:61] "etcd-calico-912009" [39631069-4624-4ede-8433-ccc68d866eaa] Running
	I0120 12:35:24.947560  591909 system_pods.go:61] "kube-apiserver-calico-912009" [50d0f21d-f92e-4c26-8dfc-e37ed39827cb] Running
	I0120 12:35:24.947565  591909 system_pods.go:61] "kube-controller-manager-calico-912009" [1f3aef6d-59c0-4413-aa4e-6e23c8881f78] Running
	I0120 12:35:24.947570  591909 system_pods.go:61] "kube-proxy-d42xv" [3d24c7d5-50b1-4871-bc05-74fd339a3e0b] Running
	I0120 12:35:24.947574  591909 system_pods.go:61] "kube-scheduler-calico-912009" [927218e7-10b5-472b-accc-e139302981f3] Running
	I0120 12:35:24.947579  591909 system_pods.go:61] "storage-provisioner" [2124f06a-3841-4d00-85f3-6c7001d3d30d] Running
	I0120 12:35:24.947587  591909 system_pods.go:74] duration metric: took 182.808552ms to wait for pod list to return data ...
	I0120 12:35:24.947598  591909 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:35:25.141030  591909 default_sa.go:45] found service account: "default"
	I0120 12:35:25.141064  591909 default_sa.go:55] duration metric: took 193.459842ms for default service account to be created ...
	I0120 12:35:25.141074  591909 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:35:25.345280  591909 system_pods.go:87] 9 kube-system pods found
	I0120 12:35:25.541923  591909 system_pods.go:105] "calico-kube-controllers-5745477d4d-mz446" [84466c15-f6c8-4e5e-9e75-a9f5712ec8e6] Running
	I0120 12:35:25.541949  591909 system_pods.go:105] "calico-node-58f5q" [4c659cf9-7e8b-4f9e-a251-005a41562c7c] Running
	I0120 12:35:25.541955  591909 system_pods.go:105] "coredns-668d6bf9bc-qtrbt" [2bf73e76-3e51-4775-931e-49299625214f] Running
	I0120 12:35:25.541960  591909 system_pods.go:105] "etcd-calico-912009" [39631069-4624-4ede-8433-ccc68d866eaa] Running
	I0120 12:35:25.541965  591909 system_pods.go:105] "kube-apiserver-calico-912009" [50d0f21d-f92e-4c26-8dfc-e37ed39827cb] Running
	I0120 12:35:25.541969  591909 system_pods.go:105] "kube-controller-manager-calico-912009" [1f3aef6d-59c0-4413-aa4e-6e23c8881f78] Running
	I0120 12:35:25.541974  591909 system_pods.go:105] "kube-proxy-d42xv" [3d24c7d5-50b1-4871-bc05-74fd339a3e0b] Running
	I0120 12:35:25.541981  591909 system_pods.go:105] "kube-scheduler-calico-912009" [927218e7-10b5-472b-accc-e139302981f3] Running
	I0120 12:35:25.541993  591909 system_pods.go:105] "storage-provisioner" [2124f06a-3841-4d00-85f3-6c7001d3d30d] Running
	I0120 12:35:25.542005  591909 system_pods.go:147] duration metric: took 400.9237ms to wait for k8s-apps to be running ...
	I0120 12:35:25.542022  591909 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 12:35:25.542076  591909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:35:25.559267  591909 system_svc.go:56] duration metric: took 17.236172ms WaitForService to wait for kubelet
	I0120 12:35:25.559301  591909 kubeadm.go:582] duration metric: took 33.970593024s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:35:25.559343  591909 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:35:25.741320  591909 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:35:25.741363  591909 node_conditions.go:123] node cpu capacity is 2
	I0120 12:35:25.741379  591909 node_conditions.go:105] duration metric: took 182.030441ms to run NodePressure ...
	I0120 12:35:25.741395  591909 start.go:241] waiting for startup goroutines ...
	I0120 12:35:25.741405  591909 start.go:246] waiting for cluster config update ...
	I0120 12:35:25.741426  591909 start.go:255] writing updated cluster config ...
	I0120 12:35:25.798226  591909 ssh_runner.go:195] Run: rm -f paused
	I0120 12:35:25.864008  591909 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 12:35:25.935661  591909 out.go:177] * Done! kubectl is now configured to use "calico-912009" cluster and "default" namespace by default
	I0120 12:35:25.355950  593695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:35:25.488046  593695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:35:25.503617  593695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:35:25.524909  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0120 12:35:25.535904  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0120 12:35:25.548267  593695 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0120 12:35:25.548339  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0120 12:35:25.559155  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 12:35:25.569907  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0120 12:35:25.581371  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 12:35:25.593457  593695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:35:25.605028  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0120 12:35:25.617300  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0120 12:35:25.629598  593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0120 12:35:25.641451  593695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:35:25.653746  593695 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:35:25.653896  593695 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:35:25.669029  593695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:35:25.682069  593695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:35:25.826095  593695 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0120 12:35:25.865783  593695 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0120 12:35:25.865871  593695 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 12:35:25.871185  593695 retry.go:31] will retry after 1.23432325s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0120 12:35:27.105977  593695 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 12:35:27.111951  593695 start.go:563] Will wait 60s for crictl version
	I0120 12:35:27.112034  593695 ssh_runner.go:195] Run: which crictl
	I0120 12:35:27.116737  593695 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:35:27.161217  593695 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0120 12:35:27.161291  593695 ssh_runner.go:195] Run: containerd --version
	I0120 12:35:27.190230  593695 ssh_runner.go:195] Run: containerd --version
	I0120 12:35:27.219481  593695 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
	I0120 12:35:27.220968  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetIP
	I0120 12:35:27.223799  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:27.224137  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:27.224161  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:27.224394  593695 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 12:35:27.228599  593695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:35:27.242027  593695 kubeadm.go:883] updating cluster {Name:custom-flannel-912009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.190 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:35:27.242166  593695 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:35:27.242266  593695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:35:27.280733  593695 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 12:35:27.280808  593695 ssh_runner.go:195] Run: which lz4
	I0120 12:35:27.285414  593695 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:35:27.290608  593695 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:35:27.290637  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398081533 bytes)
	I0120 12:35:28.842033  593695 containerd.go:563] duration metric: took 1.556664096s to copy over tarball
	I0120 12:35:28.842105  593695 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:35:31.289395  593695 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.44725613s)
	I0120 12:35:31.289429  593695 containerd.go:570] duration metric: took 2.44736643s to extract the tarball
	I0120 12:35:31.289440  593695 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 12:35:31.333681  593695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:35:31.450015  593695 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0120 12:35:31.481159  593695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:35:31.540445  593695 retry.go:31] will retry after 180.029348ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-20T12:35:31Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0120 12:35:31.720933  593695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:35:31.764494  593695 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 12:35:31.764524  593695 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:35:31.764532  593695 kubeadm.go:934] updating node { 192.168.50.190 8443 v1.32.0 containerd true true} ...
	I0120 12:35:31.764644  593695 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-912009 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0120 12:35:31.764699  593695 ssh_runner.go:195] Run: sudo crictl info
	I0120 12:35:31.801010  593695 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0120 12:35:31.801048  593695 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:35:31.801070  593695 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.190 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-912009 NodeName:custom-flannel-912009 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:35:31.801206  593695 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "custom-flannel-912009"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.190"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.190"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:35:31.801295  593695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:35:31.812630  593695 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:35:31.812728  593695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:35:31.823817  593695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0120 12:35:31.842930  593695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:35:31.861044  593695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2317 bytes)
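The kubeadm config rendered above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new. As an illustrative sanity check (not part of the test), the same file can be dry-run against the node's kubeadm binary before the real init:

	# Validate the rendered config without touching the node (illustrative sketch)
	sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run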
	I0120 12:35:31.880051  593695 ssh_runner.go:195] Run: grep 192.168.50.190	control-plane.minikube.internal$ /etc/hosts
	I0120 12:35:31.884576  593695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:35:31.898346  593695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:35:32.028778  593695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:35:32.052796  593695 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009 for IP: 192.168.50.190
	I0120 12:35:32.052827  593695 certs.go:194] generating shared ca certs ...
	I0120 12:35:32.052845  593695 certs.go:226] acquiring lock for ca certs: {Name:mk52c62007c989bdf47cf8ee68bb49e4d4d8996b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:32.053075  593695 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.key
	I0120 12:35:32.053147  593695 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.key
	I0120 12:35:32.053163  593695 certs.go:256] generating profile certs ...
	I0120 12:35:32.053247  593695 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.key
	I0120 12:35:32.053279  593695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt with IP's: []
	I0120 12:35:32.452867  593695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt ...
	I0120 12:35:32.452901  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: {Name:mk835ad9719695d1ab06cc7c134d449ff4a8ec37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:32.453073  593695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.key ...
	I0120 12:35:32.453086  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.key: {Name:mk5dcd2ed981e6e4fa3ffc179551607c1e7c7c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:32.460567  593695 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key.77137fdc
	I0120 12:35:32.460603  593695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt.77137fdc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.190]
	I0120 12:35:32.709471  593695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt.77137fdc ...
	I0120 12:35:32.709507  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt.77137fdc: {Name:mkecfe0edd1856a9b879cb97ff718bab280ced2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:32.709699  593695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key.77137fdc ...
	I0120 12:35:32.709716  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key.77137fdc: {Name:mk6d882a97424f5468af12647844aaa949a2932d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:32.709838  593695 certs.go:381] copying /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt.77137fdc -> /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt
	I0120 12:35:32.709950  593695 certs.go:385] copying /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key.77137fdc -> /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key
	I0120 12:35:32.710022  593695 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.key
	I0120 12:35:32.710036  593695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.crt with IP's: []
	I0120 12:35:33.008294  593695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.crt ...
	I0120 12:35:33.008328  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.crt: {Name:mk49acca2ab8ab3a30e85bb0e3b8b16095040d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:33.008501  593695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.key ...
	I0120 12:35:33.008514  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.key: {Name:mkc4e59c474ddf1c18711f46c3fda8af2d43d2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:33.008678  593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581.pem (1338 bytes)
	W0120 12:35:33.008717  593695 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581_empty.pem, impossibly tiny 0 bytes
	I0120 12:35:33.008726  593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:35:33.008747  593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:35:33.008801  593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:35:33.008830  593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem (1675 bytes)
	I0120 12:35:33.008869  593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem (1708 bytes)
	I0120 12:35:33.009450  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:35:33.037734  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:35:33.078488  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:35:33.105293  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 12:35:33.130922  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 12:35:33.156034  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 12:35:33.181145  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:35:33.209991  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 12:35:33.236891  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:35:33.263012  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581.pem --> /usr/share/ca-certificates/537581.pem (1338 bytes)
	I0120 12:35:33.291892  593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem --> /usr/share/ca-certificates/5375812.pem (1708 bytes)
	I0120 12:35:33.320316  593695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:35:33.339826  593695 ssh_runner.go:195] Run: openssl version
	I0120 12:35:33.346196  593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:35:33.360216  593695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:35:33.365369  593695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:16 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:35:33.365457  593695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:35:33.371913  593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:35:33.384511  593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537581.pem && ln -fs /usr/share/ca-certificates/537581.pem /etc/ssl/certs/537581.pem"
	I0120 12:35:33.396943  593695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537581.pem
	I0120 12:35:33.402006  593695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:24 /usr/share/ca-certificates/537581.pem
	I0120 12:35:33.402094  593695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537581.pem
	I0120 12:35:33.408421  593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537581.pem /etc/ssl/certs/51391683.0"
	I0120 12:35:33.422913  593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5375812.pem && ln -fs /usr/share/ca-certificates/5375812.pem /etc/ssl/certs/5375812.pem"
	I0120 12:35:33.446953  593695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5375812.pem
	I0120 12:35:33.460154  593695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:24 /usr/share/ca-certificates/5375812.pem
	I0120 12:35:33.460243  593695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5375812.pem
	I0120 12:35:33.473049  593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5375812.pem /etc/ssl/certs/3ec20f2e.0"
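The b5213941.0, 51391683.0 and 3ec20f2e.0 link names above are the OpenSSL subject hashes of the corresponding PEM files with a .0 suffix, which is how the /etc/ssl/certs lookup directory is built. Reproducing one hash by hand (the value matches the log):

	# Prints b5213941, the hash used for the /etc/ssl/certs/b5213941.0 symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem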
	I0120 12:35:33.494370  593695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:35:33.499833  593695 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 12:35:33.499899  593695 kubeadm.go:392] StartCluster: {Name:custom-flannel-912009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.190 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:35:33.500002  593695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0120 12:35:33.500097  593695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:35:33.554921  593695 cri.go:89] found id: ""
	I0120 12:35:33.555004  593695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:35:33.567155  593695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:35:33.579445  593695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:35:33.597705  593695 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:35:33.597735  593695 kubeadm.go:157] found existing configuration files:
	
	I0120 12:35:33.597796  593695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:35:33.610082  593695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:35:33.610143  593695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:35:33.620572  593695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:35:33.630336  593695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:35:33.630477  593695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:35:33.642367  593695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:35:33.654203  593695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:35:33.654285  593695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:35:33.666300  593695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:35:33.678958  593695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:35:33.679034  593695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:35:33.690383  593695 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:35:33.751799  593695 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:35:33.751856  593695 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:35:33.868316  593695 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:35:33.868495  593695 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:35:33.868635  593695 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:35:33.878015  593695 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:35:33.880879  593695 out.go:235]   - Generating certificates and keys ...
	I0120 12:35:33.880991  593695 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:35:33.881075  593695 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:35:34.118211  593695 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 12:35:34.268264  593695 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 12:35:34.395094  593695 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 12:35:34.615258  593695 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 12:35:34.840828  593695 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 12:35:34.841049  593695 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-912009 localhost] and IPs [192.168.50.190 127.0.0.1 ::1]
	I0120 12:35:34.980318  593695 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 12:35:34.980559  593695 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-912009 localhost] and IPs [192.168.50.190 127.0.0.1 ::1]
	I0120 12:35:35.340147  593695 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 12:35:35.661731  593695 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 12:35:35.819536  593695 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 12:35:35.819789  593695 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:35:36.025686  593695 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:35:36.151576  593695 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:35:36.213677  593695 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:35:36.370255  593695 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:35:36.699839  593695 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:35:36.702474  593695 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:35:36.706508  593695 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:35:36.708260  593695 out.go:235]   - Booting up control plane ...
	I0120 12:35:36.708404  593695 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:35:36.708515  593695 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:35:36.708618  593695 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:35:36.727916  593695 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:35:36.734985  593695 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:35:36.735050  593695 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:35:36.891554  593695 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:35:36.891696  593695 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:35:37.892390  593695 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001463848s
	I0120 12:35:37.892535  593695 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:35:42.892060  593695 kubeadm.go:310] [api-check] The API server is healthy after 5.002045649s
	I0120 12:35:42.907359  593695 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:35:42.923769  593695 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:35:42.947405  593695 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:35:42.947611  593695 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-912009 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:35:42.957385  593695 kubeadm.go:310] [bootstrap-token] Using token: pwfscc.y1n10nfegb7ld7mi
	I0120 12:35:42.958829  593695 out.go:235]   - Configuring RBAC rules ...
	I0120 12:35:42.958983  593695 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:35:42.963002  593695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:35:42.972421  593695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:35:42.976005  593695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:35:42.981865  593695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:35:42.985056  593695 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:35:43.299543  593695 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:35:43.743871  593695 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:35:44.299948  593695 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:35:44.304043  593695 kubeadm.go:310] 
	I0120 12:35:44.304135  593695 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:35:44.304148  593695 kubeadm.go:310] 
	I0120 12:35:44.304271  593695 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:35:44.304306  593695 kubeadm.go:310] 
	I0120 12:35:44.304374  593695 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:35:44.304467  593695 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:35:44.304538  593695 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:35:44.304551  593695 kubeadm.go:310] 
	I0120 12:35:44.304616  593695 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:35:44.304627  593695 kubeadm.go:310] 
	I0120 12:35:44.304689  593695 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:35:44.304699  593695 kubeadm.go:310] 
	I0120 12:35:44.304767  593695 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:35:44.304884  593695 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:35:44.304988  593695 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:35:44.305012  593695 kubeadm.go:310] 
	I0120 12:35:44.305132  593695 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:35:44.305245  593695 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:35:44.305260  593695 kubeadm.go:310] 
	I0120 12:35:44.305359  593695 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pwfscc.y1n10nfegb7ld7mi \
	I0120 12:35:44.305494  593695 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:326640d5f51fa6eddf3fd6f2b38f5a08d4608620129e8898d45359839be856c3 \
	I0120 12:35:44.305524  593695 kubeadm.go:310] 	--control-plane 
	I0120 12:35:44.305529  593695 kubeadm.go:310] 
	I0120 12:35:44.305630  593695 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:35:44.305636  593695 kubeadm.go:310] 
	I0120 12:35:44.305725  593695 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pwfscc.y1n10nfegb7ld7mi \
	I0120 12:35:44.305865  593695 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:326640d5f51fa6eddf3fd6f2b38f5a08d4608620129e8898d45359839be856c3 
	I0120 12:35:44.309010  593695 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
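The only preflight warning is the disabled kubelet unit, which is harmless here because minikube starts kubelet itself; the fix kubeadm suggests, if desired, is simply:

	# Optional: silence the [WARNING Service-Kubelet] preflight warning
	sudo systemctl enable kubelet.service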
	I0120 12:35:44.309072  593695 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0120 12:35:44.311925  593695 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0120 12:35:44.313463  593695 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0120 12:35:44.313529  593695 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0120 12:35:44.319726  593695 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0120 12:35:44.319758  593695 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0120 12:35:44.351216  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
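After the manifest is applied, the flannel rollout could also be checked directly; a sketch, assuming the DaemonSet name and namespace used by the upstream kube-flannel manifest (the test instead waits on node and pod readiness below):

	# Watch the flannel DaemonSet come up (names are assumptions based on the upstream manifest)
	sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-flannel rollout status daemonset/kube-flannel-ds --timeout=120s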
	I0120 12:35:44.868640  593695 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:35:44.868740  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:44.868782  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-912009 minikube.k8s.io/updated_at=2025_01_20T12_35_44_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=custom-flannel-912009 minikube.k8s.io/primary=true
	I0120 12:35:45.116669  593695 ops.go:34] apiserver oom_adj: -16
	I0120 12:35:45.116816  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:45.617431  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:46.117712  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:46.616896  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:47.117662  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:47.617183  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:48.116968  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:48.616887  593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:48.749904  593695 kubeadm.go:1113] duration metric: took 3.881252521s to wait for elevateKubeSystemPrivileges
	I0120 12:35:48.749953  593695 kubeadm.go:394] duration metric: took 15.250058721s to StartCluster
	I0120 12:35:48.749980  593695 settings.go:142] acquiring lock: {Name:mkbafde306c71e7b8958e2377ddfa5a9e3a59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:48.750089  593695 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 12:35:48.752036  593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/kubeconfig: {Name:mk113e13541afa8413ea8a359169b0824f5f9ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:48.752297  593695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 12:35:48.752305  593695 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.190 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 12:35:48.752376  593695 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:35:48.752503  593695 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-912009"
	I0120 12:35:48.752529  593695 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-912009"
	I0120 12:35:48.752553  593695 config.go:182] Loaded profile config "custom-flannel-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:35:48.752573  593695 host.go:66] Checking if "custom-flannel-912009" exists ...
	I0120 12:35:48.752614  593695 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-912009"
	I0120 12:35:48.752635  593695 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-912009"
	I0120 12:35:48.753033  593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:35:48.753071  593695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:48.753077  593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:35:48.753115  593695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:48.754038  593695 out.go:177] * Verifying Kubernetes components...
	I0120 12:35:48.755543  593695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:35:48.770900  593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46513
	I0120 12:35:48.770924  593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I0120 12:35:48.771512  593695 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:48.771523  593695 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:48.771980  593695 main.go:141] libmachine: Using API Version  1
	I0120 12:35:48.771999  593695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:48.772120  593695 main.go:141] libmachine: Using API Version  1
	I0120 12:35:48.772167  593695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:48.772407  593695 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:48.772581  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetState
	I0120 12:35:48.772694  593695 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:48.773172  593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:35:48.773221  593695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:48.775953  593695 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-912009"
	I0120 12:35:48.775985  593695 host.go:66] Checking if "custom-flannel-912009" exists ...
	I0120 12:35:48.776217  593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:35:48.776242  593695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:48.791662  593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I0120 12:35:48.791918  593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0120 12:35:48.792260  593695 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:48.792600  593695 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:48.792770  593695 main.go:141] libmachine: Using API Version  1
	I0120 12:35:48.792789  593695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:48.793183  593695 main.go:141] libmachine: Using API Version  1
	I0120 12:35:48.793202  593695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:48.793265  593695 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:48.793756  593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:35:48.793790  593695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:48.793902  593695 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:48.794308  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetState
	I0120 12:35:48.796179  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:48.798629  593695 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:35:48.800337  593695 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:35:48.800353  593695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:35:48.800370  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:48.803462  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:48.803925  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:48.803956  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:48.804206  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:48.804403  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:48.804565  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:48.804707  593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
	I0120 12:35:48.811596  593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0120 12:35:48.811951  593695 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:48.812485  593695 main.go:141] libmachine: Using API Version  1
	I0120 12:35:48.812512  593695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:48.812866  593695 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:48.813065  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetState
	I0120 12:35:48.814819  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
	I0120 12:35:48.814988  593695 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:35:48.814999  593695 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:35:48.815012  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
	I0120 12:35:48.817477  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:48.817881  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
	I0120 12:35:48.817910  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
	I0120 12:35:48.818198  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
	I0120 12:35:48.818380  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
	I0120 12:35:48.818527  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
	I0120 12:35:48.818657  593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
	I0120 12:35:49.140129  593695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:35:49.140225  593695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 12:35:49.271376  593695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:35:49.277298  593695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:35:49.757630  593695 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0120 12:35:49.759580  593695 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-912009" to be "Ready" ...
	I0120 12:35:50.126202  593695 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:50.126240  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
	I0120 12:35:50.126243  593695 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:50.126267  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
	I0120 12:35:50.126553  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Closing plugin on server side
	I0120 12:35:50.126589  593695 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:50.126596  593695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:50.126602  593695 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:50.126608  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
	I0120 12:35:50.126719  593695 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:50.126731  593695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:50.126764  593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Closing plugin on server side
	I0120 12:35:50.126851  593695 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:50.126869  593695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:50.126891  593695 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:50.126902  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
	I0120 12:35:50.127111  593695 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:50.127122  593695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:50.137124  593695 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:50.137145  593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
	I0120 12:35:50.137540  593695 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:50.137572  593695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:50.139205  593695 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 12:35:50.140687  593695 addons.go:514] duration metric: took 1.388318596s for enable addons: enabled=[storage-provisioner default-storageclass]
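From the host, the addon state for this profile can be confirmed with minikube itself; an illustrative check:

	# List addon status for the profile under test
	out/minikube-linux-amd64 addons list -p custom-flannel-912009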
	I0120 12:35:50.263249  593695 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-912009" context rescaled to 1 replicas
	I0120 12:35:51.764008  593695 node_ready.go:53] node "custom-flannel-912009" has status "Ready":"False"
	I0120 12:35:53.764278  593695 node_ready.go:53] node "custom-flannel-912009" has status "Ready":"False"
	I0120 12:35:56.267054  593695 node_ready.go:53] node "custom-flannel-912009" has status "Ready":"False"
	I0120 12:35:56.762993  593695 node_ready.go:49] node "custom-flannel-912009" has status "Ready":"True"
	I0120 12:35:56.763021  593695 node_ready.go:38] duration metric: took 7.003409226s for node "custom-flannel-912009" to be "Ready" ...
	I0120 12:35:56.763031  593695 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:56.774021  593695 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:58.781485  593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
	I0120 12:36:01.281717  593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
	I0120 12:36:03.281973  593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
	I0120 12:36:05.779798  593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
	I0120 12:36:07.781018  593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
	I0120 12:36:09.781624  593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
	I0120 12:36:12.283171  593695 pod_ready.go:93] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"True"
	I0120 12:36:12.283202  593695 pod_ready.go:82] duration metric: took 15.509154098s for pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.283215  593695 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.288965  593695 pod_ready.go:93] pod "etcd-custom-flannel-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:36:12.288990  593695 pod_ready.go:82] duration metric: took 5.767908ms for pod "etcd-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.289000  593695 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.293688  593695 pod_ready.go:93] pod "kube-apiserver-custom-flannel-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:36:12.293716  593695 pod_ready.go:82] duration metric: took 4.708111ms for pod "kube-apiserver-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.293729  593695 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.297788  593695 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:36:12.297826  593695 pod_ready.go:82] duration metric: took 4.088036ms for pod "kube-controller-manager-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.297840  593695 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-v6hzk" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.301911  593695 pod_ready.go:93] pod "kube-proxy-v6hzk" in "kube-system" namespace has status "Ready":"True"
	I0120 12:36:12.301932  593695 pod_ready.go:82] duration metric: took 4.084396ms for pod "kube-proxy-v6hzk" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.301941  593695 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.678978  593695 pod_ready.go:93] pod "kube-scheduler-custom-flannel-912009" in "kube-system" namespace has status "Ready":"True"
	I0120 12:36:12.679012  593695 pod_ready.go:82] duration metric: took 377.062726ms for pod "kube-scheduler-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
	I0120 12:36:12.679029  593695 pod_ready.go:39] duration metric: took 15.915986454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:36:12.679050  593695 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:36:12.679114  593695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:36:12.695820  593695 api_server.go:72] duration metric: took 23.943481333s to wait for apiserver process to appear ...
	I0120 12:36:12.695857  593695 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:36:12.695891  593695 api_server.go:253] Checking apiserver healthz at https://192.168.50.190:8443/healthz ...
	I0120 12:36:12.700809  593695 api_server.go:279] https://192.168.50.190:8443/healthz returned 200:
	ok
	I0120 12:36:12.701918  593695 api_server.go:141] control plane version: v1.32.0
	I0120 12:36:12.701948  593695 api_server.go:131] duration metric: took 6.082216ms to wait for apiserver health ...
	I0120 12:36:12.701958  593695 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:36:12.882081  593695 system_pods.go:59] 7 kube-system pods found
	I0120 12:36:12.882124  593695 system_pods.go:61] "coredns-668d6bf9bc-zcgzt" [a4599587-8acf-43f9-a149-178f1cc35aa0] Running
	I0120 12:36:12.882133  593695 system_pods.go:61] "etcd-custom-flannel-912009" [6fb49a98-624e-43ed-850a-8a9c63dd40fc] Running
	I0120 12:36:12.882140  593695 system_pods.go:61] "kube-apiserver-custom-flannel-912009" [4341c7c9-5d0f-4740-a7af-971594286c38] Running
	I0120 12:36:12.882146  593695 system_pods.go:61] "kube-controller-manager-custom-flannel-912009" [0db8a018-592b-4019-a02b-b3565937d695] Running
	I0120 12:36:12.882152  593695 system_pods.go:61] "kube-proxy-v6hzk" [e2019ab7-b2fc-48ac-86d2-c014ff8e07c8] Running
	I0120 12:36:12.882157  593695 system_pods.go:61] "kube-scheduler-custom-flannel-912009" [f739f365-2d5e-45ee-90d9-6e67ba46401a] Running
	I0120 12:36:12.882163  593695 system_pods.go:61] "storage-provisioner" [0f702c35-7c57-44be-aa95-58d0e3c4a56a] Running
	I0120 12:36:12.882171  593695 system_pods.go:74] duration metric: took 180.205562ms to wait for pod list to return data ...
	I0120 12:36:12.882184  593695 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:36:13.078402  593695 default_sa.go:45] found service account: "default"
	I0120 12:36:13.078437  593695 default_sa.go:55] duration metric: took 196.244937ms for default service account to be created ...
	I0120 12:36:13.078449  593695 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:36:13.281225  593695 system_pods.go:87] 7 kube-system pods found
	I0120 12:36:13.479438  593695 system_pods.go:105] "coredns-668d6bf9bc-zcgzt" [a4599587-8acf-43f9-a149-178f1cc35aa0] Running
	I0120 12:36:13.479469  593695 system_pods.go:105] "etcd-custom-flannel-912009" [6fb49a98-624e-43ed-850a-8a9c63dd40fc] Running
	I0120 12:36:13.479478  593695 system_pods.go:105] "kube-apiserver-custom-flannel-912009" [4341c7c9-5d0f-4740-a7af-971594286c38] Running
	I0120 12:36:13.479485  593695 system_pods.go:105] "kube-controller-manager-custom-flannel-912009" [0db8a018-592b-4019-a02b-b3565937d695] Running
	I0120 12:36:13.479491  593695 system_pods.go:105] "kube-proxy-v6hzk" [e2019ab7-b2fc-48ac-86d2-c014ff8e07c8] Running
	I0120 12:36:13.479496  593695 system_pods.go:105] "kube-scheduler-custom-flannel-912009" [f739f365-2d5e-45ee-90d9-6e67ba46401a] Running
	I0120 12:36:13.479501  593695 system_pods.go:105] "storage-provisioner" [0f702c35-7c57-44be-aa95-58d0e3c4a56a] Running
	I0120 12:36:13.479511  593695 system_pods.go:147] duration metric: took 401.053197ms to wait for k8s-apps to be running ...
	I0120 12:36:13.479520  593695 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 12:36:13.479592  593695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:36:13.495091  593695 system_svc.go:56] duration metric: took 15.558739ms WaitForService to wait for kubelet
	I0120 12:36:13.495133  593695 kubeadm.go:582] duration metric: took 24.742796954s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:36:13.495185  593695 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:36:13.679355  593695 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:36:13.679383  593695 node_conditions.go:123] node cpu capacity is 2
	I0120 12:36:13.679395  593695 node_conditions.go:105] duration metric: took 184.200741ms to run NodePressure ...
	I0120 12:36:13.679407  593695 start.go:241] waiting for startup goroutines ...
	I0120 12:36:13.679413  593695 start.go:246] waiting for cluster config update ...
	I0120 12:36:13.679423  593695 start.go:255] writing updated cluster config ...
	I0120 12:36:13.679733  593695 ssh_runner.go:195] Run: rm -f paused
	I0120 12:36:13.731412  593695 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 12:36:13.733373  593695 out.go:177] * Done! kubectl is now configured to use "custom-flannel-912009" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	ee78306d3d5e9       523cad1a4df73       47 seconds ago      Exited              dashboard-metrics-scraper   9                   eb6378f51655f       dashboard-metrics-scraper-86c6bf9756-m655x
	fc7381c6ddecf       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   ae522575eab5f       kubernetes-dashboard-7779f9b69b-shd26
	6086f179ade7e       6e38f40d628db       22 minutes ago      Running             storage-provisioner         0                   2e68d84371c30       storage-provisioner
	e05b244c8d4ad       c69fa2e9cbf5f       22 minutes ago      Running             coredns                     0                   8b3fb0714605e       coredns-668d6bf9bc-vbpfb
	f0fd665d58a57       c69fa2e9cbf5f       22 minutes ago      Running             coredns                     0                   83cad31acf8d3       coredns-668d6bf9bc-42d6j
	fd52f21d362c1       040f9f8aac8cd       22 minutes ago      Running             kube-proxy                  0                   2eeda3841e9b3       kube-proxy-8zz8b
	839d89b28bcc1       a389e107f4ff1       22 minutes ago      Running             kube-scheduler              2                   319de5aa1e278       kube-scheduler-embed-certs-565837
	785d0437bf082       c2e17b8d0f4a3       22 minutes ago      Running             kube-apiserver              2                   d4ec47ede7d07       kube-apiserver-embed-certs-565837
	6c7b8ce4006da       8cab3d2a8bd0f       22 minutes ago      Running             kube-controller-manager     2                   1defb10f96a59       kube-controller-manager-embed-certs-565837
	325418e82c046       a9e7e6b294baf       22 minutes ago      Running             etcd                        2                   ddcc01a635840       etcd-embed-certs-565837
	
	
	==> containerd <==
	Jan 20 12:49:37 embed-certs-565837 containerd[559]: time="2025-01-20T12:49:37.381887632Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 20 12:49:37 embed-certs-565837 containerd[559]: time="2025-01-20T12:49:37.384152052Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 20 12:49:37 embed-certs-565837 containerd[559]: time="2025-01-20T12:49:37.384257680Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 20 12:50:14 embed-certs-565837 containerd[559]: time="2025-01-20T12:50:14.361952035Z" level=info msg="CreateContainer within sandbox \"eb6378f51655fc0b43a890bf9d0ba26de4e88ce1f7551bf35dd4e73fb9db3992\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 20 12:50:14 embed-certs-565837 containerd[559]: time="2025-01-20T12:50:14.386193614Z" level=info msg="CreateContainer within sandbox \"eb6378f51655fc0b43a890bf9d0ba26de4e88ce1f7551bf35dd4e73fb9db3992\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"5d87de44d277710045a8c8231493223e33a982f7b730074f275af3f1d65628a0\""
	Jan 20 12:50:14 embed-certs-565837 containerd[559]: time="2025-01-20T12:50:14.387143417Z" level=info msg="StartContainer for \"5d87de44d277710045a8c8231493223e33a982f7b730074f275af3f1d65628a0\""
	Jan 20 12:50:14 embed-certs-565837 containerd[559]: time="2025-01-20T12:50:14.475219032Z" level=info msg="StartContainer for \"5d87de44d277710045a8c8231493223e33a982f7b730074f275af3f1d65628a0\" returns successfully"
	Jan 20 12:50:14 embed-certs-565837 containerd[559]: time="2025-01-20T12:50:14.534573256Z" level=info msg="shim disconnected" id=5d87de44d277710045a8c8231493223e33a982f7b730074f275af3f1d65628a0 namespace=k8s.io
	Jan 20 12:50:14 embed-certs-565837 containerd[559]: time="2025-01-20T12:50:14.534646182Z" level=warning msg="cleaning up after shim disconnected" id=5d87de44d277710045a8c8231493223e33a982f7b730074f275af3f1d65628a0 namespace=k8s.io
	Jan 20 12:50:14 embed-certs-565837 containerd[559]: time="2025-01-20T12:50:14.534656151Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 20 12:50:15 embed-certs-565837 containerd[559]: time="2025-01-20T12:50:15.203012872Z" level=info msg="RemoveContainer for \"724df52dad85079cbfe954bdea79552de5d24d916c21b09e6e22fffcb5e3a8d8\""
	Jan 20 12:50:15 embed-certs-565837 containerd[559]: time="2025-01-20T12:50:15.208765869Z" level=info msg="RemoveContainer for \"724df52dad85079cbfe954bdea79552de5d24d916c21b09e6e22fffcb5e3a8d8\" returns successfully"
	Jan 20 12:54:44 embed-certs-565837 containerd[559]: time="2025-01-20T12:54:44.361808095Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 12:54:44 embed-certs-565837 containerd[559]: time="2025-01-20T12:54:44.372234194Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 20 12:54:44 embed-certs-565837 containerd[559]: time="2025-01-20T12:54:44.374926312Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 20 12:54:44 embed-certs-565837 containerd[559]: time="2025-01-20T12:54:44.375025595Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 20 12:55:18 embed-certs-565837 containerd[559]: time="2025-01-20T12:55:18.362316674Z" level=info msg="CreateContainer within sandbox \"eb6378f51655fc0b43a890bf9d0ba26de4e88ce1f7551bf35dd4e73fb9db3992\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 20 12:55:18 embed-certs-565837 containerd[559]: time="2025-01-20T12:55:18.387026029Z" level=info msg="CreateContainer within sandbox \"eb6378f51655fc0b43a890bf9d0ba26de4e88ce1f7551bf35dd4e73fb9db3992\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"ee78306d3d5e9c3129a3158b87446099268f536a26fa06eb8360ee97c38fbbec\""
	Jan 20 12:55:18 embed-certs-565837 containerd[559]: time="2025-01-20T12:55:18.388751613Z" level=info msg="StartContainer for \"ee78306d3d5e9c3129a3158b87446099268f536a26fa06eb8360ee97c38fbbec\""
	Jan 20 12:55:18 embed-certs-565837 containerd[559]: time="2025-01-20T12:55:18.478108847Z" level=info msg="StartContainer for \"ee78306d3d5e9c3129a3158b87446099268f536a26fa06eb8360ee97c38fbbec\" returns successfully"
	Jan 20 12:55:18 embed-certs-565837 containerd[559]: time="2025-01-20T12:55:18.525290741Z" level=info msg="shim disconnected" id=ee78306d3d5e9c3129a3158b87446099268f536a26fa06eb8360ee97c38fbbec namespace=k8s.io
	Jan 20 12:55:18 embed-certs-565837 containerd[559]: time="2025-01-20T12:55:18.525354752Z" level=warning msg="cleaning up after shim disconnected" id=ee78306d3d5e9c3129a3158b87446099268f536a26fa06eb8360ee97c38fbbec namespace=k8s.io
	Jan 20 12:55:18 embed-certs-565837 containerd[559]: time="2025-01-20T12:55:18.525370275Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 20 12:55:18 embed-certs-565837 containerd[559]: time="2025-01-20T12:55:18.922930350Z" level=info msg="RemoveContainer for \"5d87de44d277710045a8c8231493223e33a982f7b730074f275af3f1d65628a0\""
	Jan 20 12:55:18 embed-certs-565837 containerd[559]: time="2025-01-20T12:55:18.929421069Z" level=info msg="RemoveContainer for \"5d87de44d277710045a8c8231493223e33a982f7b730074f275af3f1d65628a0\" returns successfully"
	
	
	==> coredns [e05b244c8d4adb3c21642fe71db5e704e4600a3ad2f4ccacc7082990cfb8b20d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f0fd665d58a576cbf9fd542c4c56793c3d290d44fe551c556a50feffaafa06aa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-565837
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-565837
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9
	                    minikube.k8s.io/name=embed-certs-565837
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T12_33_45_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 12:33:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-565837
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 12:56:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 12:55:50 +0000   Mon, 20 Jan 2025 12:33:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 12:55:50 +0000   Mon, 20 Jan 2025 12:33:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 12:55:50 +0000   Mon, 20 Jan 2025 12:33:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 12:55:50 +0000   Mon, 20 Jan 2025 12:33:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    embed-certs-565837
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8af94d22e1ec4bf1bcda394ab57265e6
	  System UUID:                8af94d22-e1ec-4bf1-bcda-394ab57265e6
	  Boot ID:                    f8905f8f-6744-46af-8b39-c7eae28f6b5b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-42d6j                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-668d6bf9bc-vbpfb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-565837                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-565837             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-565837    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-8zz8b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-565837             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-f79f97bbb-rv4lr                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-m655x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-shd26         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-565837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-565837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-565837 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-565837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-565837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-565837 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-565837 event: Registered Node embed-certs-565837 in Controller
	
	
	==> dmesg <==
	[  +0.057508] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050426] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.348336] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jan20 12:29] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.660706] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.136991] systemd-fstab-generator[482]: Ignoring "noauto" option for root device
	[  +0.070404] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070964] systemd-fstab-generator[494]: Ignoring "noauto" option for root device
	[  +0.227423] systemd-fstab-generator[508]: Ignoring "noauto" option for root device
	[  +0.140219] systemd-fstab-generator[520]: Ignoring "noauto" option for root device
	[  +0.340636] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[  +1.605323] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +2.591990] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +0.058655] kauditd_printk_skb: 186 callbacks suppressed
	[  +5.673009] kauditd_printk_skb: 69 callbacks suppressed
	[ +12.639337] kauditd_printk_skb: 90 callbacks suppressed
	[Jan20 12:33] systemd-fstab-generator[3100]: Ignoring "noauto" option for root device
	[  +1.805044] kauditd_printk_skb: 82 callbacks suppressed
	[  +4.831144] systemd-fstab-generator[3473]: Ignoring "noauto" option for root device
	[  +5.384375] systemd-fstab-generator[3576]: Ignoring "noauto" option for root device
	[  +0.132039] kauditd_printk_skb: 17 callbacks suppressed
	[Jan20 12:34] kauditd_printk_skb: 112 callbacks suppressed
	[  +7.188811] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [325418e82c046be9ecf33b373f993702d1a4443e2fc5f53fa8147a75c509acd9] <==
	{"level":"info","ts":"2025-01-20T12:34:38.905949Z","caller":"traceutil/trace.go:171","msg":"trace[550699722] linearizableReadLoop","detail":"{readStateIndex:616; appliedIndex:615; }","duration":"306.0169ms","start":"2025-01-20T12:34:38.599910Z","end":"2025-01-20T12:34:38.905927Z","steps":["trace[550699722] 'read index received'  (duration: 43.070133ms)","trace[550699722] 'applied index is now lower than readState.Index'  (duration: 262.944182ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T12:34:38.906267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"306.343983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:34:38.906352Z","caller":"traceutil/trace.go:171","msg":"trace[1505079512] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:594; }","duration":"306.458762ms","start":"2025-01-20T12:34:38.599887Z","end":"2025-01-20T12:34:38.906346Z","steps":["trace[1505079512] 'agreement among raft nodes before linearized reading'  (duration: 306.333513ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:34:38.906805Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T12:34:38.599875Z","time spent":"306.888932ms","remote":"127.0.0.1:42208","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-20T12:34:38.907000Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.870474ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:34:38.907127Z","caller":"traceutil/trace.go:171","msg":"trace[561304426] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:594; }","duration":"280.697888ms","start":"2025-01-20T12:34:38.626413Z","end":"2025-01-20T12:34:38.907111Z","steps":["trace[561304426] 'agreement among raft nodes before linearized reading'  (duration: 279.860579ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:34:58.791337Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.973529ms","expected-duration":"100ms","prefix":"","request":"header:<ID:646992789867711296 > lease_revoke:<id:08fa9483b44dca8e>","response":"size:28"}
	{"level":"info","ts":"2025-01-20T12:34:58.791645Z","caller":"traceutil/trace.go:171","msg":"trace[1348052798] linearizableReadLoop","detail":"{readStateIndex:648; appliedIndex:647; }","duration":"103.290865ms","start":"2025-01-20T12:34:58.688336Z","end":"2025-01-20T12:34:58.791627Z","steps":["trace[1348052798] 'read index received'  (duration: 38.216µs)","trace[1348052798] 'applied index is now lower than readState.Index'  (duration: 103.148451ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T12:34:58.791946Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.626227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-01-20T12:34:58.792062Z","caller":"traceutil/trace.go:171","msg":"trace[791845111] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:0; response_revision:621; }","duration":"103.779963ms","start":"2025-01-20T12:34:58.688272Z","end":"2025-01-20T12:34:58.792052Z","steps":["trace[791845111] 'agreement among raft nodes before linearized reading'  (duration: 103.525313ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:35:33.170040Z","caller":"traceutil/trace.go:171","msg":"trace[1846146442] transaction","detail":"{read_only:false; response_revision:657; number_of_response:1; }","duration":"130.03505ms","start":"2025-01-20T12:35:33.039986Z","end":"2025-01-20T12:35:33.170021Z","steps":["trace[1846146442] 'process raft request'  (duration: 129.460084ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:35:33.443178Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.122973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:35:33.443287Z","caller":"traceutil/trace.go:171","msg":"trace[1192834492] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:657; }","duration":"127.307708ms","start":"2025-01-20T12:35:33.315961Z","end":"2025-01-20T12:35:33.443269Z","steps":["trace[1192834492] 'range keys from in-memory index tree'  (duration: 127.029364ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:35:33.443874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.678863ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:35:33.443986Z","caller":"traceutil/trace.go:171","msg":"trace[850058159] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:657; }","duration":"163.808932ms","start":"2025-01-20T12:35:33.280165Z","end":"2025-01-20T12:35:33.443974Z","steps":["trace[850058159] 'range keys from in-memory index tree'  (duration: 163.608336ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:36:27.602248Z","caller":"traceutil/trace.go:171","msg":"trace[1629604690] transaction","detail":"{read_only:false; response_revision:720; number_of_response:1; }","duration":"112.049894ms","start":"2025-01-20T12:36:27.490170Z","end":"2025-01-20T12:36:27.602220Z","steps":["trace[1629604690] 'process raft request'  (duration: 111.669644ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:43:39.985367Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":861}
	{"level":"info","ts":"2025-01-20T12:43:40.047238Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":861,"took":"60.399963ms","hash":1056371531,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2961408,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-01-20T12:43:40.047366Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1056371531,"revision":861,"compact-revision":-1}
	{"level":"info","ts":"2025-01-20T12:48:39.994834Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1114}
	{"level":"info","ts":"2025-01-20T12:48:39.999582Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1114,"took":"3.957754ms","hash":3592283502,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1835008,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T12:48:39.999644Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3592283502,"revision":1114,"compact-revision":861}
	{"level":"info","ts":"2025-01-20T12:53:40.006923Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1366}
	{"level":"info","ts":"2025-01-20T12:53:40.012189Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1366,"took":"4.313902ms","hash":4236284373,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1835008,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T12:53:40.012265Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4236284373,"revision":1366,"compact-revision":1114}
	
	
	==> kernel <==
	 12:56:06 up 27 min,  0 users,  load average: 0.25, 0.20, 0.18
	Linux embed-certs-565837 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [785d0437bf082403aa2a923850924d337958106bf01746687db79e49d2acd9bc] <==
	 > logger="UnhandledError"
	I0120 12:51:42.838886       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 12:53:41.835757       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:53:41.836042       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 12:53:42.838377       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:53:42.838570       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 12:53:42.838988       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:53:42.839101       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0120 12:53:42.839876       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:53:42.841021       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 12:54:42.840660       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:54:42.840979       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 12:54:42.841796       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:54:42.842005       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0120 12:54:42.842166       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:54:42.844035       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6c7b8ce4006da1bb5f83ecc3c428c70ab2c03b6ab69e6ad765be4b52ea08e00c] <==
	E0120 12:51:18.657192       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:51:18.763151       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:51:48.665351       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:51:48.776286       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:52:18.672227       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:52:18.784949       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:52:48.679241       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:52:48.793255       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:53:18.686251       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:53:18.800986       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:53:48.693950       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:53:48.815082       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:54:18.700947       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:54:18.823173       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:54:48.707904       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:54:48.832304       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:54:56.380123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="225.904µs"
	I0120 12:55:09.376370       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="101.586µs"
	E0120 12:55:18.714830       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:55:18.840050       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:55:18.940814       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="133.94µs"
	I0120 12:55:21.624492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="115.012µs"
	E0120 12:55:48.721648       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:55:48.856767       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:55:50.301409       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-565837"
	
	
	==> kube-proxy [fd52f21d362c190f72d4f2083139b4944fbd60ac40dac3fef81669a41bebfa50] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 12:33:50.391131       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 12:33:50.416427       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.156"]
	E0120 12:33:50.416547       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 12:33:50.579969       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 12:33:50.580012       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 12:33:50.580104       1 server_linux.go:170] "Using iptables Proxier"
	I0120 12:33:50.664021       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 12:33:50.664425       1 server.go:497] "Version info" version="v1.32.0"
	I0120 12:33:50.664463       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 12:33:50.710795       1 config.go:199] "Starting service config controller"
	I0120 12:33:50.711019       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 12:33:50.711168       1 config.go:105] "Starting endpoint slice config controller"
	I0120 12:33:50.711273       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 12:33:50.720449       1 config.go:329] "Starting node config controller"
	I0120 12:33:50.720517       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 12:33:50.811621       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 12:33:50.811656       1 shared_informer.go:320] Caches are synced for service config
	I0120 12:33:50.820677       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [839d89b28bcc15b8832098f6367e78571ce29cee5c9e7bac9aaaba2dd5674983] <==
	W0120 12:33:41.928772       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 12:33:41.928932       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:41.929099       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 12:33:41.929232       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:41.929877       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 12:33:41.931206       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0120 12:33:41.936956       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 12:33:41.937000       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:42.761928       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 12:33:42.761985       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:42.861596       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 12:33:42.861726       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:42.960921       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0120 12:33:42.961191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 12:33:42.961826       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0120 12:33:42.962138       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:43.059325       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 12:33:43.059407       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:43.135806       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 12:33:43.135868       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:43.140640       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 12:33:43.140740       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:43.346658       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 12:33:43.346760       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0120 12:33:45.717653       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 12:54:56 embed-certs-565837 kubelet[3480]: E0120 12:54:56.359308    3480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-rv4lr" podUID="9df96932-8f93-4fe2-9802-b0bc37a64f6c"
	Jan 20 12:55:06 embed-certs-565837 kubelet[3480]: I0120 12:55:06.361276    3480 scope.go:117] "RemoveContainer" containerID="5d87de44d277710045a8c8231493223e33a982f7b730074f275af3f1d65628a0"
	Jan 20 12:55:06 embed-certs-565837 kubelet[3480]: E0120 12:55:06.361914    3480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-m655x_kubernetes-dashboard(da118a49-7712-4ee0-8d03-1404fd852f04)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-m655x" podUID="da118a49-7712-4ee0-8d03-1404fd852f04"
	Jan 20 12:55:09 embed-certs-565837 kubelet[3480]: E0120 12:55:09.360226    3480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-rv4lr" podUID="9df96932-8f93-4fe2-9802-b0bc37a64f6c"
	Jan 20 12:55:18 embed-certs-565837 kubelet[3480]: I0120 12:55:18.358786    3480 scope.go:117] "RemoveContainer" containerID="5d87de44d277710045a8c8231493223e33a982f7b730074f275af3f1d65628a0"
	Jan 20 12:55:18 embed-certs-565837 kubelet[3480]: I0120 12:55:18.920607    3480 scope.go:117] "RemoveContainer" containerID="5d87de44d277710045a8c8231493223e33a982f7b730074f275af3f1d65628a0"
	Jan 20 12:55:18 embed-certs-565837 kubelet[3480]: I0120 12:55:18.921100    3480 scope.go:117] "RemoveContainer" containerID="ee78306d3d5e9c3129a3158b87446099268f536a26fa06eb8360ee97c38fbbec"
	Jan 20 12:55:18 embed-certs-565837 kubelet[3480]: E0120 12:55:18.921312    3480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-m655x_kubernetes-dashboard(da118a49-7712-4ee0-8d03-1404fd852f04)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-m655x" podUID="da118a49-7712-4ee0-8d03-1404fd852f04"
	Jan 20 12:55:21 embed-certs-565837 kubelet[3480]: I0120 12:55:21.608152    3480 scope.go:117] "RemoveContainer" containerID="ee78306d3d5e9c3129a3158b87446099268f536a26fa06eb8360ee97c38fbbec"
	Jan 20 12:55:21 embed-certs-565837 kubelet[3480]: E0120 12:55:21.608863    3480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-m655x_kubernetes-dashboard(da118a49-7712-4ee0-8d03-1404fd852f04)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-m655x" podUID="da118a49-7712-4ee0-8d03-1404fd852f04"
	Jan 20 12:55:22 embed-certs-565837 kubelet[3480]: E0120 12:55:22.362216    3480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-rv4lr" podUID="9df96932-8f93-4fe2-9802-b0bc37a64f6c"
	Jan 20 12:55:33 embed-certs-565837 kubelet[3480]: E0120 12:55:33.360156    3480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-rv4lr" podUID="9df96932-8f93-4fe2-9802-b0bc37a64f6c"
	Jan 20 12:55:35 embed-certs-565837 kubelet[3480]: I0120 12:55:35.359265    3480 scope.go:117] "RemoveContainer" containerID="ee78306d3d5e9c3129a3158b87446099268f536a26fa06eb8360ee97c38fbbec"
	Jan 20 12:55:35 embed-certs-565837 kubelet[3480]: E0120 12:55:35.359447    3480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-m655x_kubernetes-dashboard(da118a49-7712-4ee0-8d03-1404fd852f04)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-m655x" podUID="da118a49-7712-4ee0-8d03-1404fd852f04"
	Jan 20 12:55:44 embed-certs-565837 kubelet[3480]: E0120 12:55:44.407561    3480 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 20 12:55:44 embed-certs-565837 kubelet[3480]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 12:55:44 embed-certs-565837 kubelet[3480]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 12:55:44 embed-certs-565837 kubelet[3480]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 12:55:44 embed-certs-565837 kubelet[3480]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 12:55:45 embed-certs-565837 kubelet[3480]: E0120 12:55:45.359509    3480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-rv4lr" podUID="9df96932-8f93-4fe2-9802-b0bc37a64f6c"
	Jan 20 12:55:50 embed-certs-565837 kubelet[3480]: I0120 12:55:50.360003    3480 scope.go:117] "RemoveContainer" containerID="ee78306d3d5e9c3129a3158b87446099268f536a26fa06eb8360ee97c38fbbec"
	Jan 20 12:55:50 embed-certs-565837 kubelet[3480]: E0120 12:55:50.362399    3480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-m655x_kubernetes-dashboard(da118a49-7712-4ee0-8d03-1404fd852f04)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-m655x" podUID="da118a49-7712-4ee0-8d03-1404fd852f04"
	Jan 20 12:56:00 embed-certs-565837 kubelet[3480]: E0120 12:56:00.360206    3480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-rv4lr" podUID="9df96932-8f93-4fe2-9802-b0bc37a64f6c"
	Jan 20 12:56:05 embed-certs-565837 kubelet[3480]: I0120 12:56:05.358615    3480 scope.go:117] "RemoveContainer" containerID="ee78306d3d5e9c3129a3158b87446099268f536a26fa06eb8360ee97c38fbbec"
	Jan 20 12:56:05 embed-certs-565837 kubelet[3480]: E0120 12:56:05.359166    3480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-m655x_kubernetes-dashboard(da118a49-7712-4ee0-8d03-1404fd852f04)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-m655x" podUID="da118a49-7712-4ee0-8d03-1404fd852f04"
	
	
	==> kubernetes-dashboard [fc7381c6ddecf8bba7d5d864061d770cb4514ddd8db7361c22408a7b409e851f] <==
	2025/01/20 12:43:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:44:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:44:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:45:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:45:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:46:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:46:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:47:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:47:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:48:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:48:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:49:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:49:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:50:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:50:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:51:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:51:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:52:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:52:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:53:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:53:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:54:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:54:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:55:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:55:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6086f179ade7ee209edb673bdd0ba688ad4f9d13246d3a4aa1e9c17b32c31231] <==
	I0120 12:33:52.825628       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 12:33:52.881040       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 12:33:52.881116       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 12:33:52.927218       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 12:33:52.929833       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-565837_7dbf5a8a-f350-49ff-9c3c-a03e0e366298!
	I0120 12:33:52.933829       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acd3c42c-f16c-4a3a-b315-b8fbbf1569f3", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-565837_7dbf5a8a-f350-49ff-9c3c-a03e0e366298 became leader
	I0120 12:33:53.037641       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-565837_7dbf5a8a-f350-49ff-9c3c-a03e0e366298!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-565837 -n embed-certs-565837
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-565837 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-rv4lr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-565837 describe pod metrics-server-f79f97bbb-rv4lr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-565837 describe pod metrics-server-f79f97bbb-rv4lr: exit status 1 (64.989732ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-rv4lr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-565837 describe pod metrics-server-f79f97bbb-rv4lr: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1639.59s)
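In the post-mortem above, the only non-running pod is metrics-server: the kubelet log shows it stuck in ImagePullBackOff because its image is pinned to fake.domain/registry.k8s.io/echoserver:1.4 and fake.domain does not resolve. The follow-up describe then reports NotFound, most likely because it omits -n kube-system and therefore looks in the default namespace. A minimal sketch for re-checking this by hand against a live profile, assuming embed-certs-565837 is still running (the pod name below is the one from this run and may have changed):

	# list every pod that is not Running, across all namespaces (same query the harness uses)
	kubectl --context embed-certs-565837 get po -A --field-selector=status.phase!=Running
	# describe the stuck metrics-server pod in its actual namespace to see the image-pull events
	kubectl --context embed-certs-565837 -n kube-system describe pod metrics-server-f79f97bbb-rv4lr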

                                                
                                    

Test pass (280/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 28.38
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.0/json-events 13.57
13 TestDownloadOnly/v1.32.0/preload-exists 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.07
18 TestDownloadOnly/v1.32.0/DeleteAll 0.14
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.62
22 TestOffline 108.12
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 214.56
29 TestAddons/serial/Volcano 41.71
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.54
35 TestAddons/parallel/Registry 17.46
36 TestAddons/parallel/Ingress 22.14
37 TestAddons/parallel/InspektorGadget 11.69
38 TestAddons/parallel/MetricsServer 6.3
40 TestAddons/parallel/CSI 50.41
41 TestAddons/parallel/Headlamp 18.66
42 TestAddons/parallel/CloudSpanner 5.58
43 TestAddons/parallel/LocalPath 57.33
44 TestAddons/parallel/NvidiaDevicePlugin 6.66
45 TestAddons/parallel/Yakd 12.07
47 TestAddons/StoppedEnableDisable 91.27
48 TestCertOptions 57.21
49 TestCertExpiration 284.38
51 TestForceSystemdFlag 79.18
52 TestForceSystemdEnv 46.19
54 TestKVMDriverInstallOrUpdate 5
58 TestErrorSpam/setup 42.13
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.79
61 TestErrorSpam/pause 1.61
62 TestErrorSpam/unpause 1.85
63 TestErrorSpam/stop 4.59
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 56.21
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 43.4
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.01
75 TestFunctional/serial/CacheCmd/cache/add_local 2.11
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 60.74
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.53
86 TestFunctional/serial/LogsFileCmd 1.51
87 TestFunctional/serial/InvalidService 4.17
89 TestFunctional/parallel/ConfigCmd 0.38
90 TestFunctional/parallel/DashboardCmd 14.53
91 TestFunctional/parallel/DryRun 0.34
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 0.86
97 TestFunctional/parallel/ServiceCmdConnect 21.5
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 43.15
101 TestFunctional/parallel/SSHCmd 0.42
102 TestFunctional/parallel/CpCmd 1.41
103 TestFunctional/parallel/MySQL 25.19
104 TestFunctional/parallel/FileSync 0.22
105 TestFunctional/parallel/CertSync 1.42
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
113 TestFunctional/parallel/License 0.64
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.64
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.43
121 TestFunctional/parallel/ImageCommands/Setup 1.99
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.48
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.87
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.26
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
141 TestFunctional/parallel/ServiceCmd/DeployApp 12.44
142 TestFunctional/parallel/ServiceCmd/List 0.47
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
146 TestFunctional/parallel/ServiceCmd/Format 0.35
147 TestFunctional/parallel/ProfileCmd/profile_list 0.37
148 TestFunctional/parallel/ServiceCmd/URL 0.32
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
150 TestFunctional/parallel/MountCmd/any-port 7.86
151 TestFunctional/parallel/MountCmd/specific-port 2.02
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.66
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 194.16
160 TestMultiControlPlane/serial/DeployApp 7.02
161 TestMultiControlPlane/serial/PingHostFromPods 1.21
162 TestMultiControlPlane/serial/AddWorkerNode 57.24
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
165 TestMultiControlPlane/serial/CopyFile 13.17
166 TestMultiControlPlane/serial/StopSecondaryNode 91.67
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
168 TestMultiControlPlane/serial/RestartSecondaryNode 43.87
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 472.16
171 TestMultiControlPlane/serial/DeleteSecondaryNode 7.07
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
173 TestMultiControlPlane/serial/StopCluster 183.4
174 TestMultiControlPlane/serial/RestartCluster 165.66
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
176 TestMultiControlPlane/serial/AddSecondaryNode 72.74
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
181 TestJSONOutput/start/Command 83.14
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.67
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.63
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 6.61
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 96.48
213 TestMountStart/serial/StartWithMountFirst 28.97
214 TestMountStart/serial/VerifyMountFirst 0.39
215 TestMountStart/serial/StartWithMountSecond 27.65
216 TestMountStart/serial/VerifyMountSecond 0.38
217 TestMountStart/serial/DeleteFirst 0.7
218 TestMountStart/serial/VerifyMountPostDelete 0.38
219 TestMountStart/serial/Stop 1.33
220 TestMountStart/serial/RestartStopped 24.75
221 TestMountStart/serial/VerifyMountPostStop 0.38
224 TestMultiNode/serial/FreshStart2Nodes 112.68
225 TestMultiNode/serial/DeployApp2Nodes 6.78
226 TestMultiNode/serial/PingHostFrom2Pods 0.82
227 TestMultiNode/serial/AddNode 51.97
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.58
230 TestMultiNode/serial/CopyFile 7.4
231 TestMultiNode/serial/StopNode 2.31
232 TestMultiNode/serial/StartAfterStop 36.29
233 TestMultiNode/serial/RestartKeepsNodes 332.13
234 TestMultiNode/serial/DeleteNode 2.26
235 TestMultiNode/serial/StopMultiNode 181.88
236 TestMultiNode/serial/RestartMultiNode 94.38
237 TestMultiNode/serial/ValidateNameConflict 45.77
242 TestPreload 269.7
244 TestScheduledStopUnix 117.07
248 TestRunningBinaryUpgrade 201.32
250 TestKubernetesUpgrade 166.44
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
254 TestNoKubernetes/serial/StartWithK8s 92.35
263 TestPause/serial/Start 137.04
264 TestNoKubernetes/serial/StartWithStopK8s 50.25
272 TestNetworkPlugins/group/false 3.3
276 TestNoKubernetes/serial/Start 34.83
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
278 TestNoKubernetes/serial/ProfileList 32.45
279 TestPause/serial/SecondStartNoReconfiguration 41.94
280 TestNoKubernetes/serial/Stop 1.61
281 TestNoKubernetes/serial/StartNoArgs 51.61
282 TestPause/serial/Pause 0.81
283 TestPause/serial/VerifyStatus 0.26
284 TestPause/serial/Unpause 0.7
285 TestPause/serial/PauseAgain 0.89
286 TestPause/serial/DeletePaused 0.84
287 TestPause/serial/VerifyDeletedResources 0.49
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
289 TestStoppedBinaryUpgrade/Setup 2.61
290 TestStoppedBinaryUpgrade/Upgrade 186.52
292 TestStartStop/group/old-k8s-version/serial/FirstStart 224.38
294 TestStartStop/group/no-preload/serial/FirstStart 158.78
295 TestStoppedBinaryUpgrade/MinikubeLogs 1.09
297 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.65
298 TestStartStop/group/no-preload/serial/DeployApp 10.34
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
300 TestStartStop/group/no-preload/serial/Stop 91.04
301 TestStartStop/group/old-k8s-version/serial/DeployApp 9.46
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.07
303 TestStartStop/group/old-k8s-version/serial/Stop 91.41
304 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.31
306 TestStartStop/group/newest-cni/serial/FirstStart 49.37
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.47
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
313 TestStartStop/group/newest-cni/serial/Stop 7.33
314 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/newest-cni/serial/SecondStart 33.84
316 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/old-k8s-version/serial/SecondStart 149.08
318 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
319 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
320 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
321 TestStartStop/group/newest-cni/serial/Pause 2.43
323 TestStartStop/group/embed-certs/serial/FirstStart 59.78
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 315.33
326 TestStartStop/group/embed-certs/serial/DeployApp 10.34
327 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
328 TestStartStop/group/embed-certs/serial/Stop 91.06
329 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
331 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
332 TestStartStop/group/old-k8s-version/serial/Pause 2.6
333 TestNetworkPlugins/group/kindnet/Start 61.85
334 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
336 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
337 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
338 TestNetworkPlugins/group/kindnet/NetCatPod 9.24
339 TestNetworkPlugins/group/kindnet/DNS 0.16
340 TestNetworkPlugins/group/kindnet/Localhost 0.13
341 TestNetworkPlugins/group/kindnet/HairPin 0.13
342 TestNetworkPlugins/group/auto/Start 90.79
343 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
344 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
345 TestNetworkPlugins/group/auto/KubeletFlags 0.21
346 TestNetworkPlugins/group/auto/NetCatPod 11.25
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.71
349 TestNetworkPlugins/group/flannel/Start 74.48
350 TestNetworkPlugins/group/auto/DNS 0.21
351 TestNetworkPlugins/group/auto/Localhost 0.13
352 TestNetworkPlugins/group/auto/HairPin 0.13
353 TestNetworkPlugins/group/enable-default-cni/Start 96.08
354 TestNetworkPlugins/group/flannel/ControllerPod 6.01
355 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
356 TestNetworkPlugins/group/flannel/NetCatPod 8.26
357 TestNetworkPlugins/group/flannel/DNS 0.16
358 TestNetworkPlugins/group/flannel/Localhost 0.21
359 TestNetworkPlugins/group/flannel/HairPin 0.38
360 TestNetworkPlugins/group/bridge/Start 62.44
361 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
362 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.32
363 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
364 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
365 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
366 TestNetworkPlugins/group/calico/Start 82.01
367 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
368 TestNetworkPlugins/group/bridge/NetCatPod 9.28
369 TestNetworkPlugins/group/bridge/DNS 0.17
370 TestNetworkPlugins/group/bridge/Localhost 0.15
371 TestNetworkPlugins/group/bridge/HairPin 0.18
372 TestNetworkPlugins/group/custom-flannel/Start 78.49
373 TestNetworkPlugins/group/calico/ControllerPod 6.02
374 TestNetworkPlugins/group/calico/KubeletFlags 0.38
375 TestNetworkPlugins/group/calico/NetCatPod 11.12
376 TestNetworkPlugins/group/calico/DNS 0.17
377 TestNetworkPlugins/group/calico/Localhost 0.17
378 TestNetworkPlugins/group/calico/HairPin 0.13
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.23
381 TestNetworkPlugins/group/custom-flannel/DNS 0.15
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (28.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-803330 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-803330 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (28.376218499s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (28.38s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0120 11:15:39.665485  537581 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0120 11:15:39.665590  537581 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-803330
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-803330: exit status 85 (68.527267ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-803330 | jenkins | v1.35.0 | 20 Jan 25 11:15 UTC |          |
	|         | -p download-only-803330        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 11:15:11
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 11:15:11.333912  537593 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:15:11.334040  537593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:15:11.334049  537593 out.go:358] Setting ErrFile to fd 2...
	I0120 11:15:11.334053  537593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:15:11.334253  537593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	W0120 11:15:11.334370  537593 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20151-530330/.minikube/config/config.json: open /home/jenkins/minikube-integration/20151-530330/.minikube/config/config.json: no such file or directory
	I0120 11:15:11.334917  537593 out.go:352] Setting JSON to true
	I0120 11:15:11.335939  537593 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3453,"bootTime":1737368258,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 11:15:11.336049  537593 start.go:139] virtualization: kvm guest
	I0120 11:15:11.338390  537593 out.go:97] [download-only-803330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0120 11:15:11.338492  537593 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball: no such file or directory
	I0120 11:15:11.338523  537593 notify.go:220] Checking for updates...
	I0120 11:15:11.339877  537593 out.go:169] MINIKUBE_LOCATION=20151
	I0120 11:15:11.341391  537593 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:15:11.342680  537593 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 11:15:11.343899  537593 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 11:15:11.345142  537593 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0120 11:15:11.347334  537593 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 11:15:11.347532  537593 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:15:11.378940  537593 out.go:97] Using the kvm2 driver based on user configuration
	I0120 11:15:11.378971  537593 start.go:297] selected driver: kvm2
	I0120 11:15:11.378980  537593 start.go:901] validating driver "kvm2" against <nil>
	I0120 11:15:11.379462  537593 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 11:15:11.379596  537593 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-530330/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 11:15:11.395557  537593 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 11:15:11.395612  537593 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 11:15:11.396160  537593 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0120 11:15:11.396307  537593 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 11:15:11.396337  537593 cni.go:84] Creating CNI manager for ""
	I0120 11:15:11.396393  537593 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 11:15:11.396406  537593 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 11:15:11.396470  537593 start.go:340] cluster config:
	{Name:download-only-803330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-803330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 11:15:11.396644  537593 iso.go:125] acquiring lock: {Name:mk734d848ce0e9a68d8d00ecbd0f5085f599b42f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 11:15:11.398898  537593 out.go:97] Downloading VM boot image ...
	I0120 11:15:11.398946  537593 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 11:15:23.273749  537593 out.go:97] Starting "download-only-803330" primary control-plane node in "download-only-803330" cluster
	I0120 11:15:23.273783  537593 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 11:15:23.909678  537593 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0120 11:15:23.909716  537593 cache.go:56] Caching tarball of preloaded images
	I0120 11:15:23.909903  537593 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 11:15:23.911673  537593 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0120 11:15:23.911688  537593 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0120 11:15:24.551812  537593 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-803330 host does not exist
	  To start a cluster, run: "minikube start -p download-only-803330"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-803330
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.0/json-events (13.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-011046 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-011046 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (13.564924955s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (13.57s)

                                                
                                    
TestDownloadOnly/v1.32.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I0120 11:15:53.579359  537581 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 11:15:53.579418  537581 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-011046
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-011046: exit status 85 (65.910068ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-803330 | jenkins | v1.35.0 | 20 Jan 25 11:15 UTC |                     |
	|         | -p download-only-803330        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 20 Jan 25 11:15 UTC | 20 Jan 25 11:15 UTC |
	| delete  | -p download-only-803330        | download-only-803330 | jenkins | v1.35.0 | 20 Jan 25 11:15 UTC | 20 Jan 25 11:15 UTC |
	| start   | -o=json --download-only        | download-only-011046 | jenkins | v1.35.0 | 20 Jan 25 11:15 UTC |                     |
	|         | -p download-only-011046        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 11:15:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 11:15:40.056438  537862 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:15:40.056539  537862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:15:40.056550  537862 out.go:358] Setting ErrFile to fd 2...
	I0120 11:15:40.056558  537862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:15:40.056737  537862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	I0120 11:15:40.057316  537862 out.go:352] Setting JSON to true
	I0120 11:15:40.058387  537862 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3482,"bootTime":1737368258,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 11:15:40.058510  537862 start.go:139] virtualization: kvm guest
	I0120 11:15:40.060545  537862 out.go:97] [download-only-011046] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 11:15:40.060660  537862 notify.go:220] Checking for updates...
	I0120 11:15:40.062063  537862 out.go:169] MINIKUBE_LOCATION=20151
	I0120 11:15:40.063388  537862 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:15:40.064765  537862 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 11:15:40.065994  537862 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 11:15:40.067070  537862 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0120 11:15:40.069032  537862 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 11:15:40.069257  537862 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:15:40.102867  537862 out.go:97] Using the kvm2 driver based on user configuration
	I0120 11:15:40.102889  537862 start.go:297] selected driver: kvm2
	I0120 11:15:40.102896  537862 start.go:901] validating driver "kvm2" against <nil>
	I0120 11:15:40.103195  537862 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 11:15:40.103294  537862 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-530330/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 11:15:40.118101  537862 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 11:15:40.118149  537862 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 11:15:40.118639  537862 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0120 11:15:40.118768  537862 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 11:15:40.118795  537862 cni.go:84] Creating CNI manager for ""
	I0120 11:15:40.118847  537862 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 11:15:40.118856  537862 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 11:15:40.118902  537862 start.go:340] cluster config:
	{Name:download-only-011046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:download-only-011046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 11:15:40.118997  537862 iso.go:125] acquiring lock: {Name:mk734d848ce0e9a68d8d00ecbd0f5085f599b42f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 11:15:40.120625  537862 out.go:97] Starting "download-only-011046" primary control-plane node in "download-only-011046" cluster
	I0120 11:15:40.120644  537862 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 11:15:40.722920  537862 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
	I0120 11:15:40.722966  537862 cache.go:56] Caching tarball of preloaded images
	I0120 11:15:40.723159  537862 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 11:15:40.725126  537862 out.go:97] Downloading Kubernetes v1.32.0 preload ...
	I0120 11:15:40.725145  537862 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 ...
	I0120 11:15:40.842948  537862 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:bb9e95697e147383ee2f722871c6c317 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-011046 host does not exist
	  To start a cluster, run: "minikube start -p download-only-011046"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-011046
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I0120 11:15:54.179547  537581 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-017896 --alsologtostderr --binary-mirror http://127.0.0.1:39615 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-017896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-017896
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
TestOffline (108.12s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-006323 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-006323 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m46.38517917s)
helpers_test.go:175: Cleaning up "offline-containerd-006323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-006323
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-006323: (1.730340187s)
--- PASS: TestOffline (108.12s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-861226
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-861226: exit status 85 (61.932114ms)

                                                
                                                
-- stdout --
	* Profile "addons-861226" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-861226"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-861226
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-861226: exit status 85 (63.549613ms)

                                                
                                                
-- stdout --
	* Profile "addons-861226" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-861226"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (214.56s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-861226 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-861226 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m34.562513857s)
--- PASS: TestAddons/Setup (214.56s)

                                                
                                    
TestAddons/serial/Volcano (41.71s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 25.008618ms
addons_test.go:823: volcano-controller stabilized in 25.156659ms
addons_test.go:807: volcano-scheduler stabilized in 25.316529ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-kfgxf" [22a0c991-cfe0-431b-a268-c3ef620082d3] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.005125486s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-hx58n" [bbd9c8e3-0974-4e4e-97a4-39d7825c6455] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004017396s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-ckd4q" [82fc8c1d-6608-4eb0-acd2-8a78ba94fbf9] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004225111s
addons_test.go:842: (dbg) Run:  kubectl --context addons-861226 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-861226 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-861226 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [18ce57d3-04c7-4608-a8d7-46c05214180c] Pending
helpers_test.go:344: "test-job-nginx-0" [18ce57d3-04c7-4608-a8d7-46c05214180c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [18ce57d3-04c7-4608-a8d7-46c05214180c] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004426808s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-861226 addons disable volcano --alsologtostderr -v=1: (11.306097338s)
--- PASS: TestAddons/serial/Volcano (41.71s)
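
The "waiting 6m0s for pods matching ..." checks that recur throughout this report all reduce to polling the API server until every pod carrying the given label reports Running. The following is a minimal client-go sketch of that pattern, not minikube's actual helpers_test.go code: the namespace, label selector, timeout, and kubeconfig path are copied from the Volcano check above, while waitForPodsRunning is a hypothetical name and the real helper also inspects readiness conditions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls until every pod matching the selector is Running,
// or the timeout expires. Hypothetical sketch of the "waiting for pods
// matching" checks in this log, not the suite's real helper.
func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					break
				}
			}
			if running {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	// Kubeconfig path as written by minikube in this CI layout (assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20151-530330/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same wait as the log: app=volcano-scheduler in volcano-system, 6 minutes.
	if err := waitForPodsRunning(cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("all matching pods are Running")
}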

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-861226 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-861226 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.54s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-861226 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-861226 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ed105c03-9c5a-4c54-a930-918ce507fca6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ed105c03-9c5a-4c54-a930-918ce507fca6] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.011262181s
addons_test.go:633: (dbg) Run:  kubectl --context addons-861226 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-861226 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-861226 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.54s)

                                                
                                    
TestAddons/parallel/Registry (17.46s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.135883ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c86875c6f-gvqg4" [d08ec980-955f-40f6-9de2-3bd321707d25] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.089856161s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wzzwk" [3294d489-7bad-4dfb-bd96-1863ba2320cb] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004240217s
addons_test.go:331: (dbg) Run:  kubectl --context addons-861226 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-861226 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-861226 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.585962208s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 ip
2025/01/20 11:20:46 [DEBUG] GET http://192.168.39.66:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.46s)

                                                
                                    
TestAddons/parallel/Ingress (22.14s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-861226 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-861226 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-861226 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [98594985-6693-4bd6-9225-25f44c6f2ee5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [98594985-6693-4bd6-9225-25f44c6f2ee5] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003574429s
I0120 11:20:58.854812  537581 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-861226 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.66
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-861226 addons disable ingress-dns --alsologtostderr -v=1: (1.71811512s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-861226 addons disable ingress --alsologtostderr -v=1: (7.899727929s)
--- PASS: TestAddons/parallel/Ingress (22.14s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.69s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-phd6r" [74daef20-c275-4139-a2f3-2638d9501a09] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007020338s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-861226 addons disable inspektor-gadget --alsologtostderr -v=1: (6.68594403s)
--- PASS: TestAddons/parallel/InspektorGadget (11.69s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.435906ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-6mr6h" [71bea700-badc-4932-b76d-1e6cacc17c64] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004825235s
addons_test.go:402: (dbg) Run:  kubectl --context addons-861226 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-861226 addons disable metrics-server --alsologtostderr -v=1: (1.21987878s)
--- PASS: TestAddons/parallel/MetricsServer (6.30s)

                                                
                                    
TestAddons/parallel/CSI (50.41s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0120 11:20:36.049649  537581 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0120 11:20:36.054642  537581 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0120 11:20:36.054671  537581 kapi.go:107] duration metric: took 5.038566ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.048521ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-861226 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-861226 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f63c11d2-76f5-4a57-812d-02c030c6de70] Pending
helpers_test.go:344: "task-pv-pod" [f63c11d2-76f5-4a57-812d-02c030c6de70] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f63c11d2-76f5-4a57-812d-02c030c6de70] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.005440938s
addons_test.go:511: (dbg) Run:  kubectl --context addons-861226 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-861226 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-861226 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-861226 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-861226 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-861226 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-861226 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c0200bcb-3ee5-4118-819d-718d48b820f0] Pending
helpers_test.go:344: "task-pv-pod-restore" [c0200bcb-3ee5-4118-819d-718d48b820f0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c0200bcb-3ee5-4118-819d-718d48b820f0] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004357481s
addons_test.go:553: (dbg) Run:  kubectl --context addons-861226 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-861226 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-861226 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-861226 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.847363938s)
--- PASS: TestAddons/parallel/CSI (50.41s)

                                                
                                    
TestAddons/parallel/Headlamp (18.66s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-861226 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-dfzkf" [b7950eed-988f-4d5f-9518-0c70d91ea6b6] Pending
helpers_test.go:344: "headlamp-69d78d796f-dfzkf" [b7950eed-988f-4d5f-9518-0c70d91ea6b6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-dfzkf" [b7950eed-988f-4d5f-9518-0c70d91ea6b6] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005090681s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-861226 addons disable headlamp --alsologtostderr -v=1: (6.785470828s)
--- PASS: TestAddons/parallel/Headlamp (18.66s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-8rhnq" [e2939643-37a8-411e-b445-c447400a4e53] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004452584s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
TestAddons/parallel/LocalPath (57.33s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-861226 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-861226 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861226 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [52dc680f-e736-4b9d-8dc8-335ec2c293ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [52dc680f-e736-4b9d-8dc8-335ec2c293ac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [52dc680f-e736-4b9d-8dc8-335ec2c293ac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004645974s
addons_test.go:906: (dbg) Run:  kubectl --context addons-861226 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 ssh "cat /opt/local-path-provisioner/pvc-69cc91a7-7073-41f5-9142-579fed9bc0b5_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-861226 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-861226 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-861226 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.340719094s)
--- PASS: TestAddons/parallel/LocalPath (57.33s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.66s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9cfjr" [fdca26a3-8fbe-46ff-977b-36823cd63360] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.095910583s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.66s)

                                                
                                    
TestAddons/parallel/Yakd (12.07s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-5rwnq" [21900fbb-4b1c-477d-9306-0ab6c0b9b4f2] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006254788s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-861226 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-861226 addons disable yakd --alsologtostderr -v=1: (6.062454694s)
--- PASS: TestAddons/parallel/Yakd (12.07s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-861226
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-861226: (1m30.970605019s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-861226
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-861226
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-861226
--- PASS: TestAddons/StoppedEnableDisable (91.27s)

                                                
                                    
TestCertOptions (57.21s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-425567 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-425567 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (55.725717347s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-425567 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-425567 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-425567 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-425567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-425567
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-425567: (1.015597649s)
--- PASS: TestCertOptions (57.21s)
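
TestCertOptions starts a cluster with extra --apiserver-ips, --apiserver-names, and --apiserver-port flags and then inspects the generated apiserver certificate over SSH with openssl. Purely as an illustration of what that openssl call verifies, here is a hedged standard-library Go sketch that parses the same certificate and prints its SANs; the file path comes from the ssh command above and is only readable inside the minikube VM, so this is a sketch of the check rather than how the suite actually performs it.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Parse the apiserver certificate and report its SANs, roughly what
// `openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt`
// is used for in TestCertOptions. Illustrative sketch only.
func main() {
	// Path from the log; exists inside the VM, not on the CI host.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read cert:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse cert:", err)
		os.Exit(1)
	}
	fmt.Println("DNS names:", cert.DNSNames) // should include localhost and www.google.com
	for _, ip := range cert.IPAddresses {    // should include 127.0.0.1 and 192.168.15.15
		fmt.Println("IP SAN:", ip)
	}
	fmt.Println("not after:", cert.NotAfter)
}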

                                                
                                    
TestCertExpiration (284.38s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-339313 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-339313 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m15.413035677s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-339313 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-339313 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (27.921659055s)
helpers_test.go:175: Cleaning up "cert-expiration-339313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-339313
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-339313: (1.047397364s)
--- PASS: TestCertExpiration (284.38s)

                                                
                                    
TestForceSystemdFlag (79.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-817727 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0120 12:19:29.428295  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-817727 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m18.183932628s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-817727 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-817727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-817727
--- PASS: TestForceSystemdFlag (79.18s)

                                                
                                    
TestForceSystemdEnv (46.19s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-056789 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-056789 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (44.977689301s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-056789 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-056789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-056789
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-056789: (1.008613558s)
--- PASS: TestForceSystemdEnv (46.19s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0120 12:17:24.857463  537581 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:17:24.857643  537581 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0120 12:17:24.893128  537581 install.go:62] docker-machine-driver-kvm2: exit status 1
W0120 12:17:24.893440  537581 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0120 12:17:24.893506  537581 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1765997890/001/docker-machine-driver-kvm2
I0120 12:17:25.134864  537581 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1765997890/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660] Decompressors:map[bz2:0xc0000153d0 gz:0xc0000153d8 tar:0xc000015310 tar.bz2:0xc000015330 tar.gz:0xc000015350 tar.xz:0xc000015380 tar.zst:0xc0000153b0 tbz2:0xc000015330 tgz:0xc000015350 txz:0xc000015380 tzst:0xc0000153b0 xz:0xc0000153e0 zip:0xc000015400 zst:0xc0000153e8] Getters:map[file:0xc0020141c0 http:0xc000729d10 https:0xc000729d60] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0120 12:17:25.134922  537581 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1765997890/001/docker-machine-driver-kvm2
I0120 12:17:27.910532  537581 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:17:27.910629  537581 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0120 12:17:27.944255  537581 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0120 12:17:27.944299  537581 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0120 12:17:27.944382  537581 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0120 12:17:27.944419  537581 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1765997890/002/docker-machine-driver-kvm2
I0120 12:17:28.002628  537581 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1765997890/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660] Decompressors:map[bz2:0xc0000153d0 gz:0xc0000153d8 tar:0xc000015310 tar.bz2:0xc000015330 tar.gz:0xc000015350 tar.xz:0xc000015380 tar.zst:0xc0000153b0 tbz2:0xc000015330 tgz:0xc000015350 txz:0xc000015380 tzst:0xc0000153b0 xz:0xc0000153e0 zip:0xc000015400 zst:0xc0000153e8] Getters:map[file:0xc001aadeb0 http:0xc0008a74f0 https:0xc0008a7540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0120 12:17:28.002686  537581 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1765997890/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.00s)
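
The two 404s in the TestKVMDriverInstallOrUpdate log above are expected: the driver installer first requests the arch-suffixed release asset (docker-machine-driver-kvm2-amd64) and, when its checksum file is missing, falls back to the un-suffixed common binary. The sketch below illustrates that retry order only; fetchDriver and download are hypothetical names, and the real code additionally verifies the checksum and fixes file permissions, which this sketch omits.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchDriver tries the arch-specific binary first and, if that download
// fails (for example the 404 seen in the log), retries the common binary.
func fetchDriver(version, arch, dst string) error {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version
	urls := []string{
		fmt.Sprintf("%s/docker-machine-driver-kvm2-%s", base, arch), // arch-specific first
		base + "/docker-machine-driver-kvm2",                        // common fallback
	}
	var lastErr error
	for _, u := range urls {
		if err := download(u, dst); err != nil {
			lastErr = err
			continue
		}
		return nil
	}
	return lastErr
}

// download fetches url into dst, treating any non-200 status as an error.
func download(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	if err := fetchDriver("v1.3.0", "amd64", "/tmp/docker-machine-driver-kvm2"); err != nil {
		fmt.Fprintln(os.Stderr, "download failed:", err)
	}
}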

                                                
                                    
TestErrorSpam/setup (42.13s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-212563 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-212563 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-212563 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-212563 --driver=kvm2  --container-runtime=containerd: (42.129661045s)
--- PASS: TestErrorSpam/setup (42.13s)

                                                
                                    
TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.79s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
TestErrorSpam/pause (1.61s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 pause
--- PASS: TestErrorSpam/pause (1.61s)

                                                
                                    
TestErrorSpam/unpause (1.85s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

                                                
                                    
TestErrorSpam/stop (4.59s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 stop: (1.612498162s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 stop: (1.413187069s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-212563 --log_dir /tmp/nospam-212563 stop: (1.561917788s)
--- PASS: TestErrorSpam/stop (4.59s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/test/nested/copy/537581/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (56.21s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-935944 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0120 11:24:29.426582  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:24:29.433015  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:24:29.444413  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:24:29.465859  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:24:29.507330  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:24:29.588829  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:24:29.750426  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:24:30.072221  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:24:30.714350  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:24:31.996022  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:24:34.557998  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:24:39.679422  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:24:49.921633  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-935944 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (56.212700253s)
--- PASS: TestFunctional/serial/StartWithProxy (56.21s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (43.4s)

=== RUN   TestFunctional/serial/SoftStart
I0120 11:25:05.884742  537581 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-935944 --alsologtostderr -v=8
E0120 11:25:10.403895  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-935944 --alsologtostderr -v=8: (43.401393146s)
functional_test.go:663: soft start took 43.402291767s for "functional-935944" cluster.
I0120 11:25:49.286586  537581 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (43.40s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-935944 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 cache add registry.k8s.io/pause:3.3
E0120 11:25:51.365966  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-935944 cache add registry.k8s.io/pause:3.3: (1.065709209s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-935944 /tmp/TestFunctionalserialCacheCmdcacheadd_local1627269347/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 cache add minikube-local-cache-test:functional-935944
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-935944 cache add minikube-local-cache-test:functional-935944: (1.795512969s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 cache delete minikube-local-cache-test:functional-935944
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-935944
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935944 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (223.820116ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 kubectl -- --context functional-935944 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-935944 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (60.74s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-935944 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-935944 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m0.735780281s)
functional_test.go:761: restart took 1m0.735900574s for "functional-935944" cluster.
I0120 11:26:57.556909  537581 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (60.74s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-935944 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
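
Note: the check above requires every control-plane pod to report phase Running and a Ready condition. The sketch below reproduces that check with plain kubectl and JSON decoding; it is not the test's own code and assumes the functional-935944 context exists.

// component_health_sketch.go: verify control-plane pods are Running and Ready,
// mirroring the per-component checks logged above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-935944",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
		if p.Status.Phase != "Running" || ready != "True" {
			log.Fatalf("control-plane pod %s is not healthy", p.Metadata.Name)
		}
	}
}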

                                                
                                    
TestFunctional/serial/LogsCmd (1.53s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-935944 logs: (1.533886705s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 logs --file /tmp/TestFunctionalserialLogsFileCmd3717358832/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-935944 logs --file /tmp/TestFunctionalserialLogsFileCmd3717358832/001/logs.txt: (1.507498515s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (4.17s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-935944 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-935944
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-935944: exit status 115 (296.494341ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.70:30231 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-935944 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.17s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935944 config get cpus: exit status 14 (65.281037ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935944 config get cpus: exit status 14 (62.391549ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
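
Note: exit status 14 is what `minikube config get` returns for a key that is not set, which is why both `config get cpus` calls issued right after `config unset cpus` are expected to fail. A hedged sketch of the same set/get/unset round trip (assumes the profile from this run and that the exit code stays 14):

// config_cmd_sketch.go: exercise `minikube config set/get/unset` and check
// that reading an unset key fails with exit code 14, as in the log above.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func exitCode(err error) int {
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return 0
}

func main() {
	profile := "functional-935944" // assumption: existing profile name from this run

	run := func(args ...string) error {
		cmd := exec.Command("minikube", append([]string{"-p", profile}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("$ minikube -p %s %v\n%s", profile, args, out)
		return err
	}

	_ = run("config", "unset", "cpus")
	if code := exitCode(run("config", "get", "cpus")); code != 14 {
		log.Fatalf("expected exit code 14 for unset key, got %d", code)
	}
	if err := run("config", "set", "cpus", "2"); err != nil {
		log.Fatal(err)
	}
	if err := run("config", "get", "cpus"); err != nil {
		log.Fatal(err)
	}
	_ = run("config", "unset", "cpus")
}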

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-935944 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-935944 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 546576: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.53s)

                                                
                                    
TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-935944 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-935944 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (177.93333ms)

-- stdout --
	* [functional-935944] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0120 11:27:30.599471  546298 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:27:30.599596  546298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:27:30.599608  546298 out.go:358] Setting ErrFile to fd 2...
	I0120 11:27:30.599615  546298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:27:30.599803  546298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	I0120 11:27:30.600420  546298 out.go:352] Setting JSON to false
	I0120 11:27:30.601488  546298 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4193,"bootTime":1737368258,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 11:27:30.601610  546298 start.go:139] virtualization: kvm guest
	I0120 11:27:30.603714  546298 out.go:177] * [functional-935944] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 11:27:30.605369  546298 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 11:27:30.605371  546298 notify.go:220] Checking for updates...
	I0120 11:27:30.606833  546298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:27:30.608346  546298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 11:27:30.609759  546298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 11:27:30.613434  546298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 11:27:30.617843  546298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 11:27:30.619653  546298 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 11:27:30.620192  546298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:27:30.620285  546298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:27:30.640926  546298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0120 11:27:30.641538  546298 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:27:30.642248  546298 main.go:141] libmachine: Using API Version  1
	I0120 11:27:30.642287  546298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:27:30.642771  546298 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:27:30.642971  546298 main.go:141] libmachine: (functional-935944) Calling .DriverName
	I0120 11:27:30.643239  546298 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:27:30.643663  546298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:27:30.643715  546298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:27:30.660970  546298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0120 11:27:30.661584  546298 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:27:30.666138  546298 main.go:141] libmachine: Using API Version  1
	I0120 11:27:30.666182  546298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:27:30.666754  546298 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:27:30.667001  546298 main.go:141] libmachine: (functional-935944) Calling .DriverName
	I0120 11:27:30.703796  546298 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 11:27:30.705017  546298 start.go:297] selected driver: kvm2
	I0120 11:27:30.705030  546298 start.go:901] validating driver "kvm2" against &{Name:functional-935944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-935944 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/miniku
be-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 11:27:30.705156  546298 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 11:27:30.707278  546298 out.go:201] 
	W0120 11:27:30.708920  546298 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0120 11:27:30.710237  546298 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-935944 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.34s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-935944 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-935944 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (165.77155ms)

-- stdout --
	* [functional-935944] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0120 11:27:30.554549  546283 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:27:30.554692  546283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:27:30.554703  546283 out.go:358] Setting ErrFile to fd 2...
	I0120 11:27:30.554709  546283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:27:30.555043  546283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	I0120 11:27:30.555623  546283 out.go:352] Setting JSON to false
	I0120 11:27:30.556749  546283 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4192,"bootTime":1737368258,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 11:27:30.556843  546283 start.go:139] virtualization: kvm guest
	I0120 11:27:30.559312  546283 out.go:177] * [functional-935944] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0120 11:27:30.561105  546283 notify.go:220] Checking for updates...
	I0120 11:27:30.561152  546283 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 11:27:30.562770  546283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:27:30.564316  546283 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 11:27:30.565738  546283 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 11:27:30.567450  546283 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 11:27:30.569098  546283 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 11:27:30.570880  546283 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 11:27:30.571270  546283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:27:30.571325  546283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:27:30.587722  546283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I0120 11:27:30.588200  546283 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:27:30.588931  546283 main.go:141] libmachine: Using API Version  1
	I0120 11:27:30.588955  546283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:27:30.589281  546283 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:27:30.589486  546283 main.go:141] libmachine: (functional-935944) Calling .DriverName
	I0120 11:27:30.589897  546283 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:27:30.590340  546283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:27:30.590419  546283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:27:30.607028  546283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38913
	I0120 11:27:30.607615  546283 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:27:30.608424  546283 main.go:141] libmachine: Using API Version  1
	I0120 11:27:30.608465  546283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:27:30.608810  546283 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:27:30.609043  546283 main.go:141] libmachine: (functional-935944) Calling .DriverName
	I0120 11:27:30.652836  546283 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0120 11:27:30.654314  546283 start.go:297] selected driver: kvm2
	I0120 11:27:30.654336  546283 start.go:901] validating driver "kvm2" against &{Name:functional-935944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-935944 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/miniku
be-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 11:27:30.654487  546283 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 11:27:30.656504  546283 out.go:201] 
	W0120 11:27:30.658772  546283 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0120 11:27:30.660177  546283 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.86s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.86s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (21.5s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-935944 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-935944 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-lr4wg" [ebd9725d-43dc-42fe-a40c-cf256ca0d4f1] Pending
helpers_test.go:344: "hello-node-connect-58f9cf68d8-lr4wg" [ebd9725d-43dc-42fe-a40c-cf256ca0d4f1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-lr4wg" [ebd9725d-43dc-42fe-a40c-cf256ca0d4f1] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.00498887s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.70:30307
functional_test.go:1675: http://192.168.39.70:30307: success! body:

Hostname: hello-node-connect-58f9cf68d8-lr4wg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.70:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.70:30307
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.50s)
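
Note: the connectivity check above reduces to resolving the NodePort URL via `minikube service --url` and issuing a GET against it. A sketch of that round trip, assuming the hello-node-connect deployment and service already exist as created in this run:

// service_connect_sketch.go: resolve a NodePort service URL via minikube and
// confirm the echoserver answers, as the test above does.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the reachable URL of the hello-node-connect service.
	out, err := exec.Command("minikube", "-p", "functional-935944",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out))
	fmt.Println("endpoint:", url)

	// The echoserver replies with request details (hostname, headers, ...).
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("unexpected status %d", resp.StatusCode)
	}
	fmt.Println(string(body))
}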

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (43.15s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1ef74eba-4b49-44ae-91ee-606acb89421c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004284101s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-935944 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-935944 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-935944 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-935944 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-935944 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [726672e6-d9fd-4326-bd68-8543fa5d16eb] Pending
helpers_test.go:344: "sp-pod" [726672e6-d9fd-4326-bd68-8543fa5d16eb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [726672e6-d9fd-4326-bd68-8543fa5d16eb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.004384599s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-935944 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-935944 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-935944 delete -f testdata/storage-provisioner/pod.yaml: (1.055265575s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-935944 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c2818e21-f0ed-4fa9-82ea-070d0f9bd5a5] Pending
helpers_test.go:344: "sp-pod" [c2818e21-f0ed-4fa9-82ea-070d0f9bd5a5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c2818e21-f0ed-4fa9-82ea-070d0f9bd5a5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004931881s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-935944 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.15s)
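
Note: the point of this test is that a file written to the PVC-backed mount survives deleting and recreating the pod. The sketch below reduces that persistence check to the same kubectl calls; the manifest path and pod name are taken from the log above, and the `kubectl wait` step stands in for the test's own readiness polling.

// pvc_persistence_sketch.go: write a file into the PVC-backed mount, recreate
// the pod, and verify the file is still there.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-935944"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", full, out)
	return out, err
}

func main() {
	// Assumes sp-pod is already Running with the PVC mounted at /tmp/mount,
	// as set up by testdata/storage-provisioner/{pvc,pod}.yaml in the log.
	if _, err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		log.Fatal(err)
	}
	if _, err := kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
		log.Fatal(err)
	}
	if _, err := kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
		log.Fatal(err)
	}
	if _, err := kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m"); err != nil {
		log.Fatal(err)
	}
	if out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"); err != nil || len(out) == 0 {
		log.Fatal("expected /tmp/mount/foo to survive the pod recreation")
	}
}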

                                                
                                    
TestFunctional/parallel/SSHCmd (0.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.41s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh -n functional-935944 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 cp functional-935944:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd328700607/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh -n functional-935944 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh -n functional-935944 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.41s)

                                                
                                    
TestFunctional/parallel/MySQL (25.19s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-935944 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-9wxwg" [40fbc48e-fce4-4dfd-ac76-e2b2c1cfb0d3] Pending
helpers_test.go:344: "mysql-58ccfd96bb-9wxwg" [40fbc48e-fce4-4dfd-ac76-e2b2c1cfb0d3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-9wxwg" [40fbc48e-fce4-4dfd-ac76-e2b2c1cfb0d3] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.007249861s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-935944 exec mysql-58ccfd96bb-9wxwg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-935944 exec mysql-58ccfd96bb-9wxwg -- mysql -ppassword -e "show databases;": exit status 1 (220.106723ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0120 11:27:25.736995  537581 retry.go:31] will retry after 878.105019ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-935944 exec mysql-58ccfd96bb-9wxwg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-935944 exec mysql-58ccfd96bb-9wxwg -- mysql -ppassword -e "show databases;": exit status 1 (228.467387ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0120 11:27:26.844501  537581 retry.go:31] will retry after 2.005378891s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-935944 exec mysql-58ccfd96bb-9wxwg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-935944 exec mysql-58ccfd96bb-9wxwg -- mysql -ppassword -e "show databases;": exit status 1 (162.347423ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0120 11:27:29.012592  537581 retry.go:31] will retry after 1.327135757s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-935944 exec mysql-58ccfd96bb-9wxwg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.19s)
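
Note: the retries above are expected. Immediately after the mysql pod turns Ready, the server may still refuse connections or reject the root password while it initializes, so the harness retries with an increasing wait. A sketch of that retry loop (pod name and password come from this run's testdata/mysql.yaml setup; the backoff policy here is a simplification of the test's randomized waits):

// mysql_retry_sketch.go: poll `show databases;` inside the mysql pod until the
// server finishes initializing, mirroring the retry/backoff seen in the log.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-58ccfd96bb-9wxwg" // assumption: pod name from this particular run
	deadline := time.Now().Add(2 * time.Minute)
	backoff := time.Second

	for {
		out, err := exec.Command("kubectl", "--context", "functional-935944",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql is up:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became ready: %v\n%s", err, out)
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // simple exponential backoff
	}
}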

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/537581/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "sudo cat /etc/test/nested/copy/537581/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.42s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/537581.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "sudo cat /etc/ssl/certs/537581.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/537581.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "sudo cat /usr/share/ca-certificates/537581.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5375812.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "sudo cat /etc/ssl/certs/5375812.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5375812.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "sudo cat /usr/share/ca-certificates/5375812.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.42s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-935944 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935944 ssh "sudo systemctl is-active docker": exit status 1 (241.186186ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935944 ssh "sudo systemctl is-active crio": exit status 1 (230.771492ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
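
Note: `systemctl is-active` exits 0 only when the unit is active, so the exit status seen above (3, surfaced through ssh as exit 1) means docker and crio are inactive, which is the desired state on a containerd node. A sketch that interprets the exit code the same way (profile name assumed from this run):

// runtime_disabled_sketch.go: confirm that non-selected runtimes are inactive
// by treating a non-zero `systemctl is-active` exit as "not running".
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		out, err := exec.Command("minikube", "-p", "functional-935944",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		if err == nil {
			log.Fatalf("%s is unexpectedly active on a containerd node (state %q)", unit, state)
		}
		fmt.Printf("%s: %s (non-zero exit, as expected for a disabled runtime)\n", unit, state)
	}
}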

                                                
                                    
TestFunctional/parallel/License (0.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.64s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-935944 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.0
registry.k8s.io/kube-proxy:v1.32.0
registry.k8s.io/kube-controller-manager:v1.32.0
registry.k8s.io/kube-apiserver:v1.32.0
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-935944
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-935944
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-935944 image ls --format short --alsologtostderr:
I0120 11:27:32.570777  546568 out.go:345] Setting OutFile to fd 1 ...
I0120 11:27:32.571198  546568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:27:32.571213  546568 out.go:358] Setting ErrFile to fd 2...
I0120 11:27:32.571218  546568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:27:32.571456  546568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
I0120 11:27:32.572255  546568 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:27:32.572382  546568 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:27:32.572852  546568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 11:27:32.572927  546568 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:27:32.590848  546568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43603
I0120 11:27:32.591435  546568 main.go:141] libmachine: () Calling .GetVersion
I0120 11:27:32.592285  546568 main.go:141] libmachine: Using API Version  1
I0120 11:27:32.592322  546568 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:27:32.592724  546568 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:27:32.593251  546568 main.go:141] libmachine: (functional-935944) Calling .GetState
I0120 11:27:32.595488  546568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 11:27:32.595548  546568 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:27:32.612430  546568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40035
I0120 11:27:32.612946  546568 main.go:141] libmachine: () Calling .GetVersion
I0120 11:27:32.613544  546568 main.go:141] libmachine: Using API Version  1
I0120 11:27:32.613577  546568 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:27:32.613986  546568 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:27:32.614227  546568 main.go:141] libmachine: (functional-935944) Calling .DriverName
I0120 11:27:32.614438  546568 ssh_runner.go:195] Run: systemctl --version
I0120 11:27:32.614464  546568 main.go:141] libmachine: (functional-935944) Calling .GetSSHHostname
I0120 11:27:32.617773  546568 main.go:141] libmachine: (functional-935944) DBG | domain functional-935944 has defined MAC address 52:54:00:cb:d3:56 in network mk-functional-935944
I0120 11:27:32.618217  546568 main.go:141] libmachine: (functional-935944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:d3:56", ip: ""} in network mk-functional-935944: {Iface:virbr1 ExpiryTime:2025-01-20 12:24:25 +0000 UTC Type:0 Mac:52:54:00:cb:d3:56 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:functional-935944 Clientid:01:52:54:00:cb:d3:56}
I0120 11:27:32.618245  546568 main.go:141] libmachine: (functional-935944) DBG | domain functional-935944 has defined IP address 192.168.39.70 and MAC address 52:54:00:cb:d3:56 in network mk-functional-935944
I0120 11:27:32.618390  546568 main.go:141] libmachine: (functional-935944) Calling .GetSSHPort
I0120 11:27:32.618621  546568 main.go:141] libmachine: (functional-935944) Calling .GetSSHKeyPath
I0120 11:27:32.618814  546568 main.go:141] libmachine: (functional-935944) Calling .GetSSHUsername
I0120 11:27:32.619025  546568 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/functional-935944/id_rsa Username:docker}
I0120 11:27:32.712539  546568 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 11:27:32.807445  546568 main.go:141] libmachine: Making call to close driver server
I0120 11:27:32.807461  546568 main.go:141] libmachine: (functional-935944) Calling .Close
I0120 11:27:32.807783  546568 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:27:32.807818  546568 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 11:27:32.807829  546568 main.go:141] libmachine: (functional-935944) DBG | Closing plugin on server side
I0120 11:27:32.807832  546568 main.go:141] libmachine: Making call to close driver server
I0120 11:27:32.807863  546568 main.go:141] libmachine: (functional-935944) Calling .Close
I0120 11:27:32.808137  546568 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:27:32.808154  546568 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 11:27:32.808158  546568 main.go:141] libmachine: (functional-935944) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-935944 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-controller-manager     | v1.32.0            | sha256:8cab3d | 26.3MB |
| docker.io/kicbase/echo-server               | functional-935944  | sha256:9056ab | 2.37MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| localhost/my-image                          | functional-935944  | sha256:7c33d3 | 775kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| docker.io/library/minikube-local-cache-test | functional-935944  | sha256:f089ba | 990B   |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:50415e | 38.6MB |
| registry.k8s.io/kube-proxy                  | v1.32.0            | sha256:040f9f | 30.9MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/nginx                     | latest             | sha256:9bea9f | 72.1MB |
| registry.k8s.io/kube-apiserver              | v1.32.0            | sha256:c2e17b | 28.7MB |
| registry.k8s.io/kube-scheduler              | v1.32.0            | sha256:a389e1 | 20.7MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-935944 image ls --format table --alsologtostderr:
I0120 11:27:37.816419  546924 out.go:345] Setting OutFile to fd 1 ...
I0120 11:27:37.816560  546924 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:27:37.816571  546924 out.go:358] Setting ErrFile to fd 2...
I0120 11:27:37.816578  546924 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:27:37.816751  546924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
I0120 11:27:37.817427  546924 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:27:37.817566  546924 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:27:37.818022  546924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 11:27:37.818084  546924 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:27:37.833852  546924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43065
I0120 11:27:37.834345  546924 main.go:141] libmachine: () Calling .GetVersion
I0120 11:27:37.834954  546924 main.go:141] libmachine: Using API Version  1
I0120 11:27:37.834978  546924 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:27:37.835321  546924 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:27:37.835525  546924 main.go:141] libmachine: (functional-935944) Calling .GetState
I0120 11:27:37.837324  546924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 11:27:37.837372  546924 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:27:37.853399  546924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
I0120 11:27:37.854030  546924 main.go:141] libmachine: () Calling .GetVersion
I0120 11:27:37.854667  546924 main.go:141] libmachine: Using API Version  1
I0120 11:27:37.854710  546924 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:27:37.855053  546924 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:27:37.855248  546924 main.go:141] libmachine: (functional-935944) Calling .DriverName
I0120 11:27:37.855455  546924 ssh_runner.go:195] Run: systemctl --version
I0120 11:27:37.855494  546924 main.go:141] libmachine: (functional-935944) Calling .GetSSHHostname
I0120 11:27:37.858028  546924 main.go:141] libmachine: (functional-935944) DBG | domain functional-935944 has defined MAC address 52:54:00:cb:d3:56 in network mk-functional-935944
I0120 11:27:37.858499  546924 main.go:141] libmachine: (functional-935944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:d3:56", ip: ""} in network mk-functional-935944: {Iface:virbr1 ExpiryTime:2025-01-20 12:24:25 +0000 UTC Type:0 Mac:52:54:00:cb:d3:56 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:functional-935944 Clientid:01:52:54:00:cb:d3:56}
I0120 11:27:37.858537  546924 main.go:141] libmachine: (functional-935944) DBG | domain functional-935944 has defined IP address 192.168.39.70 and MAC address 52:54:00:cb:d3:56 in network mk-functional-935944
I0120 11:27:37.858709  546924 main.go:141] libmachine: (functional-935944) Calling .GetSSHPort
I0120 11:27:37.858873  546924 main.go:141] libmachine: (functional-935944) Calling .GetSSHKeyPath
I0120 11:27:37.859032  546924 main.go:141] libmachine: (functional-935944) Calling .GetSSHUsername
I0120 11:27:37.859208  546924 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/functional-935944/id_rsa Username:docker}
I0120 11:27:37.949842  546924 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 11:27:38.005771  546924 main.go:141] libmachine: Making call to close driver server
I0120 11:27:38.005789  546924 main.go:141] libmachine: (functional-935944) Calling .Close
I0120 11:27:38.006158  546924 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:27:38.006184  546924 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 11:27:38.006192  546924 main.go:141] libmachine: Making call to close driver server
I0120 11:27:38.006200  546924 main.go:141] libmachine: (functional-935944) Calling .Close
I0120 11:27:38.006452  546924 main.go:141] libmachine: (functional-935944) DBG | Closing plugin on server side
I0120 11:27:38.006535  546924 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:27:38.006573  546924 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
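
For reference, the table above can be reproduced by hand against the same profile (a sketch, assuming the functional-935944 profile from this run is still up):
	minikube -p functional-935944 image ls --format table
	# the table is derived from the runtime's image list, which the log above fetches with:
	minikube -p functional-935944 ssh -- sudo crictl images --output json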

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-935944 image ls --format json --alsologtostderr:
[{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:7c33d3a5f6d02519f42f82c53f0331f4252b1e7548a5b8254d4faf9cbe252428","repoDigests":[],"repoTags":["localhost/my-image:functional-935944"],"size":"774888"},{"id":"sha256:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"28670542"},{"id":"sha256:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08","repoDigests":["registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"30906462"},{"id":"sha256:c69fa2e9c
bf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"26254834"},{"id":"sha256:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5","repoDigests":["registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.0"],"size":"20656471"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f1
6b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-935944"],"size":"2372971"},{"id":"sha256:f089ba87beb961d54a90fbe321881042511b3e5b80f1ce516372f3f00467c189","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-935944"],"size":"990"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7a
e1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"38601118"},{"id":"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"72080558"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry
.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-935944 image ls --format json --alsologtostderr:
I0120 11:27:37.560927  546901 out.go:345] Setting OutFile to fd 1 ...
I0120 11:27:37.561043  546901 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:27:37.561053  546901 out.go:358] Setting ErrFile to fd 2...
I0120 11:27:37.561057  546901 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:27:37.561276  546901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
I0120 11:27:37.561961  546901 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:27:37.562066  546901 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:27:37.562408  546901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 11:27:37.562471  546901 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:27:37.578031  546901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44067
I0120 11:27:37.578552  546901 main.go:141] libmachine: () Calling .GetVersion
I0120 11:27:37.579307  546901 main.go:141] libmachine: Using API Version  1
I0120 11:27:37.579332  546901 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:27:37.579779  546901 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:27:37.580080  546901 main.go:141] libmachine: (functional-935944) Calling .GetState
I0120 11:27:37.582241  546901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 11:27:37.582285  546901 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:27:37.598238  546901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
I0120 11:27:37.598720  546901 main.go:141] libmachine: () Calling .GetVersion
I0120 11:27:37.599298  546901 main.go:141] libmachine: Using API Version  1
I0120 11:27:37.599332  546901 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:27:37.599699  546901 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:27:37.599942  546901 main.go:141] libmachine: (functional-935944) Calling .DriverName
I0120 11:27:37.600158  546901 ssh_runner.go:195] Run: systemctl --version
I0120 11:27:37.600197  546901 main.go:141] libmachine: (functional-935944) Calling .GetSSHHostname
I0120 11:27:37.603049  546901 main.go:141] libmachine: (functional-935944) DBG | domain functional-935944 has defined MAC address 52:54:00:cb:d3:56 in network mk-functional-935944
I0120 11:27:37.603650  546901 main.go:141] libmachine: (functional-935944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:d3:56", ip: ""} in network mk-functional-935944: {Iface:virbr1 ExpiryTime:2025-01-20 12:24:25 +0000 UTC Type:0 Mac:52:54:00:cb:d3:56 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:functional-935944 Clientid:01:52:54:00:cb:d3:56}
I0120 11:27:37.603685  546901 main.go:141] libmachine: (functional-935944) DBG | domain functional-935944 has defined IP address 192.168.39.70 and MAC address 52:54:00:cb:d3:56 in network mk-functional-935944
I0120 11:27:37.603806  546901 main.go:141] libmachine: (functional-935944) Calling .GetSSHPort
I0120 11:27:37.604014  546901 main.go:141] libmachine: (functional-935944) Calling .GetSSHKeyPath
I0120 11:27:37.604179  546901 main.go:141] libmachine: (functional-935944) Calling .GetSSHUsername
I0120 11:27:37.604333  546901 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/functional-935944/id_rsa Username:docker}
I0120 11:27:37.693538  546901 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 11:27:37.760464  546901 main.go:141] libmachine: Making call to close driver server
I0120 11:27:37.760483  546901 main.go:141] libmachine: (functional-935944) Calling .Close
I0120 11:27:37.760843  546901 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:27:37.760908  546901 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 11:27:37.760919  546901 main.go:141] libmachine: (functional-935944) DBG | Closing plugin on server side
I0120 11:27:37.760929  546901 main.go:141] libmachine: Making call to close driver server
I0120 11:27:37.760940  546901 main.go:141] libmachine: (functional-935944) Calling .Close
I0120 11:27:37.761194  546901 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:27:37.761221  546901 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
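
The JSON listing is the convenient one to post-process; a minimal sketch (assuming jq is installed and the same profile name as this run):
	# print every repo tag known to the containerd runtime
	minikube -p functional-935944 image ls --format json | jq -r '.[].repoTags[]'
	# keep only image id and size
	minikube -p functional-935944 image ls --format json | jq '[.[] | {id, size}]'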

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-935944 image ls --format yaml --alsologtostderr:
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-935944
size: "2372971"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:f089ba87beb961d54a90fbe321881042511b3e5b80f1ce516372f3f00467c189
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-935944
size: "990"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "28670542"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "26254834"
- id: sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "38601118"
- id: sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "72080558"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "30906462"
- id: sha256:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "20656471"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-935944 image ls --format yaml --alsologtostderr:
I0120 11:27:32.877250  546601 out.go:345] Setting OutFile to fd 1 ...
I0120 11:27:32.877421  546601 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:27:32.877435  546601 out.go:358] Setting ErrFile to fd 2...
I0120 11:27:32.877446  546601 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:27:32.877792  546601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
I0120 11:27:32.878702  546601 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:27:32.878875  546601 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:27:32.879492  546601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 11:27:32.879568  546601 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:27:32.896219  546601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
I0120 11:27:32.896794  546601 main.go:141] libmachine: () Calling .GetVersion
I0120 11:27:32.897868  546601 main.go:141] libmachine: Using API Version  1
I0120 11:27:32.897904  546601 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:27:32.898276  546601 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:27:32.898553  546601 main.go:141] libmachine: (functional-935944) Calling .GetState
I0120 11:27:32.900669  546601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 11:27:32.900718  546601 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:27:32.916278  546601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37565
I0120 11:27:32.916780  546601 main.go:141] libmachine: () Calling .GetVersion
I0120 11:27:32.917424  546601 main.go:141] libmachine: Using API Version  1
I0120 11:27:32.917459  546601 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:27:32.917839  546601 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:27:32.918028  546601 main.go:141] libmachine: (functional-935944) Calling .DriverName
I0120 11:27:32.918218  546601 ssh_runner.go:195] Run: systemctl --version
I0120 11:27:32.918244  546601 main.go:141] libmachine: (functional-935944) Calling .GetSSHHostname
I0120 11:27:32.921463  546601 main.go:141] libmachine: (functional-935944) DBG | domain functional-935944 has defined MAC address 52:54:00:cb:d3:56 in network mk-functional-935944
I0120 11:27:32.921887  546601 main.go:141] libmachine: (functional-935944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:d3:56", ip: ""} in network mk-functional-935944: {Iface:virbr1 ExpiryTime:2025-01-20 12:24:25 +0000 UTC Type:0 Mac:52:54:00:cb:d3:56 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:functional-935944 Clientid:01:52:54:00:cb:d3:56}
I0120 11:27:32.921927  546601 main.go:141] libmachine: (functional-935944) DBG | domain functional-935944 has defined IP address 192.168.39.70 and MAC address 52:54:00:cb:d3:56 in network mk-functional-935944
I0120 11:27:32.922027  546601 main.go:141] libmachine: (functional-935944) Calling .GetSSHPort
I0120 11:27:32.922206  546601 main.go:141] libmachine: (functional-935944) Calling .GetSSHKeyPath
I0120 11:27:32.922380  546601 main.go:141] libmachine: (functional-935944) Calling .GetSSHUsername
I0120 11:27:32.922540  546601 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/functional-935944/id_rsa Username:docker}
I0120 11:27:33.018806  546601 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 11:27:33.075620  546601 main.go:141] libmachine: Making call to close driver server
I0120 11:27:33.075642  546601 main.go:141] libmachine: (functional-935944) Calling .Close
I0120 11:27:33.075956  546601 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:27:33.076031  546601 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 11:27:33.076048  546601 main.go:141] libmachine: Making call to close driver server
I0120 11:27:33.076058  546601 main.go:141] libmachine: (functional-935944) Calling .Close
I0120 11:27:33.076363  546601 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:27:33.076379  546601 main.go:141] libmachine: (functional-935944) DBG | Closing plugin on server side
I0120 11:27:33.076382  546601 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935944 ssh pgrep buildkitd: exit status 1 (207.450333ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image build -t localhost/my-image:functional-935944 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-935944 image build -t localhost/my-image:functional-935944 testdata/build --alsologtostderr: (3.931353581s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-935944 image build -t localhost/my-image:functional-935944 testdata/build --alsologtostderr:
I0120 11:27:33.339783  546654 out.go:345] Setting OutFile to fd 1 ...
I0120 11:27:33.340044  546654 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:27:33.340052  546654 out.go:358] Setting ErrFile to fd 2...
I0120 11:27:33.340057  546654 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:27:33.340323  546654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
I0120 11:27:33.340985  546654 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:27:33.341645  546654 config.go:182] Loaded profile config "functional-935944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:27:33.342099  546654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 11:27:33.342164  546654 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:27:33.358812  546654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45031
I0120 11:27:33.359334  546654 main.go:141] libmachine: () Calling .GetVersion
I0120 11:27:33.359959  546654 main.go:141] libmachine: Using API Version  1
I0120 11:27:33.359993  546654 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:27:33.360415  546654 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:27:33.360656  546654 main.go:141] libmachine: (functional-935944) Calling .GetState
I0120 11:27:33.362615  546654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 11:27:33.362662  546654 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:27:33.380563  546654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
I0120 11:27:33.381247  546654 main.go:141] libmachine: () Calling .GetVersion
I0120 11:27:33.381902  546654 main.go:141] libmachine: Using API Version  1
I0120 11:27:33.381927  546654 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:27:33.382259  546654 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:27:33.382501  546654 main.go:141] libmachine: (functional-935944) Calling .DriverName
I0120 11:27:33.382738  546654 ssh_runner.go:195] Run: systemctl --version
I0120 11:27:33.382764  546654 main.go:141] libmachine: (functional-935944) Calling .GetSSHHostname
I0120 11:27:33.386140  546654 main.go:141] libmachine: (functional-935944) DBG | domain functional-935944 has defined MAC address 52:54:00:cb:d3:56 in network mk-functional-935944
I0120 11:27:33.386700  546654 main.go:141] libmachine: (functional-935944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:d3:56", ip: ""} in network mk-functional-935944: {Iface:virbr1 ExpiryTime:2025-01-20 12:24:25 +0000 UTC Type:0 Mac:52:54:00:cb:d3:56 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:functional-935944 Clientid:01:52:54:00:cb:d3:56}
I0120 11:27:33.386723  546654 main.go:141] libmachine: (functional-935944) DBG | domain functional-935944 has defined IP address 192.168.39.70 and MAC address 52:54:00:cb:d3:56 in network mk-functional-935944
I0120 11:27:33.386972  546654 main.go:141] libmachine: (functional-935944) Calling .GetSSHPort
I0120 11:27:33.387212  546654 main.go:141] libmachine: (functional-935944) Calling .GetSSHKeyPath
I0120 11:27:33.387418  546654 main.go:141] libmachine: (functional-935944) Calling .GetSSHUsername
I0120 11:27:33.387623  546654 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/functional-935944/id_rsa Username:docker}
I0120 11:27:33.472574  546654 build_images.go:161] Building image from path: /tmp/build.743605042.tar
I0120 11:27:33.472661  546654 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0120 11:27:33.483840  546654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.743605042.tar
I0120 11:27:33.488971  546654 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.743605042.tar: stat -c "%s %y" /var/lib/minikube/build/build.743605042.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.743605042.tar': No such file or directory
I0120 11:27:33.489010  546654 ssh_runner.go:362] scp /tmp/build.743605042.tar --> /var/lib/minikube/build/build.743605042.tar (3072 bytes)
I0120 11:27:33.528995  546654 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.743605042
I0120 11:27:33.541816  546654 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.743605042 -xf /var/lib/minikube/build/build.743605042.tar
I0120 11:27:33.555728  546654 containerd.go:394] Building image: /var/lib/minikube/build/build.743605042
I0120 11:27:33.555839  546654 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.743605042 --local dockerfile=/var/lib/minikube/build/build.743605042 --output type=image,name=localhost/my-image:functional-935944
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:ca2eef5f289a0d96fd45baf1e62b7defa52ba379bd89fa9c755ea895622aabb3
#8 exporting manifest sha256:ca2eef5f289a0d96fd45baf1e62b7defa52ba379bd89fa9c755ea895622aabb3 0.0s done
#8 exporting config sha256:7c33d3a5f6d02519f42f82c53f0331f4252b1e7548a5b8254d4faf9cbe252428 0.1s done
#8 naming to localhost/my-image:functional-935944 done
#8 DONE 0.4s
I0120 11:27:37.171510  546654 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.743605042 --local dockerfile=/var/lib/minikube/build/build.743605042 --output type=image,name=localhost/my-image:functional-935944: (3.615625775s)
I0120 11:27:37.171572  546654 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.743605042
I0120 11:27:37.193251  546654 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.743605042.tar
I0120 11:27:37.212373  546654 build_images.go:217] Built localhost/my-image:functional-935944 from /tmp/build.743605042.tar
I0120 11:27:37.212413  546654 build_images.go:133] succeeded building to: functional-935944
I0120 11:27:37.212419  546654 build_images.go:134] failed building to: 
I0120 11:27:37.212449  546654 main.go:141] libmachine: Making call to close driver server
I0120 11:27:37.212470  546654 main.go:141] libmachine: (functional-935944) Calling .Close
I0120 11:27:37.212791  546654 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:27:37.212813  546654 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 11:27:37.212823  546654 main.go:141] libmachine: Making call to close driver server
I0120 11:27:37.212832  546654 main.go:141] libmachine: (functional-935944) Calling .Close
I0120 11:27:37.214471  546654 main.go:141] libmachine: (functional-935944) DBG | Closing plugin on server side
I0120 11:27:37.214487  546654 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:27:37.214501  546654 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)
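
The build flow above can be repeated outside the harness (a sketch; any directory with a Dockerfile can stand in for testdata/build, and the tag is arbitrary):
	minikube -p functional-935944 image build -t localhost/my-image:functional-935944 testdata/build
	# per the stderr above, on containerd the context is copied into the VM and BuildKit is driven directly:
	#   sudo buildctl build --frontend dockerfile.v0 --local context=<dir> --local dockerfile=<dir> --output type=image,name=localhost/my-image:functional-935944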

TestFunctional/parallel/ImageCommands/Setup (1.99s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.96490794s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-935944
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.99s)
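
Setup only stages a local image for the load tests that follow; the equivalent manual steps are the two commands the test ran (assuming a local Docker daemon):
	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-935944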

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
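
All three UpdateContextCmd cases exercise the same subcommand, which refreshes the kubeconfig entry for the profile so kubectl points at the VM's current IP; by hand that is simply:
	minikube -p functional-935944 update-context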

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image load --daemon kicbase/echo-server:functional-935944 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-935944 image load --daemon kicbase/echo-server:functional-935944 --alsologtostderr: (1.237999516s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image load --daemon kicbase/echo-server:functional-935944 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-935944 image load --daemon kicbase/echo-server:functional-935944 --alsologtostderr: (1.60445696s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.87s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-935944
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image load --daemon kicbase/echo-server:functional-935944 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-935944 image load --daemon kicbase/echo-server:functional-935944 --alsologtostderr: (1.096277992s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.26s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image save kicbase/echo-server:functional-935944 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image rm kicbase/echo-server:functional-935944 --alsologtostderr
E0120 11:27:13.287976  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image ls
I0120 11:27:13.387703  537581 retry.go:31] will retry after 1.93781269s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2dbff9ad-886d-412d-95a7-34999130b8a9 ResourceVersion:746 Generation:0 CreationTimestamp:2025-01-20 11:27:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001afe130 VolumeMode:0xc001afe140 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-935944
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 image save --daemon kicbase/echo-server:functional-935944 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-935944
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)
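
Taken together, the save/remove/load tests above amount to the following round trip (a sketch; the tar path is arbitrary):
	minikube -p functional-935944 image save kicbase/echo-server:functional-935944 /tmp/echo-server-save.tar
	minikube -p functional-935944 image rm kicbase/echo-server:functional-935944
	minikube -p functional-935944 image load /tmp/echo-server-save.tar
	# and back into the local Docker daemon:
	minikube -p functional-935944 image save --daemon kicbase/echo-server:functional-935944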

TestFunctional/parallel/ServiceCmd/DeployApp (12.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-935944 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-935944 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-dqblp" [755a891a-203e-4882-88cc-e05976b00075] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-dqblp" [755a891a-203e-4882-88cc-e05976b00075] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004977574s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.44s)
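
The hello-node deployment used by the remaining ServiceCmd tests can be recreated with the same two kubectl commands the test ran:
	kubectl --context functional-935944 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-935944 expose deployment hello-node --type=NodePort --port=8080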

TestFunctional/parallel/ServiceCmd/List (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 service list -o json
functional_test.go:1494: Took "460.639504ms" to run "out/minikube-linux-amd64 -p functional-935944 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.70:32448
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)
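
Once hello-node is exposed, the URL checks reduce to (same profile and service as above):
	minikube -p functional-935944 service hello-node --url            # plain NodePort endpoint
	minikube -p functional-935944 service hello-node --https --url    # same endpoint with an https scheme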

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "314.600401ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.886532ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.70:32448
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "333.639954ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.230838ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)
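
The ProfileCmd variants differ only in output flags; for scripting, the JSON forms shown above are the useful ones:
	minikube profile list -o json           # full profile objects, includes cluster status
	minikube profile list -o json --light   # skips the status probe, hence the much shorter runtime above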

TestFunctional/parallel/MountCmd/any-port (7.86s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-935944 /tmp/TestFunctionalparallelMountCmdany-port1977466483/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737372449300756476" to /tmp/TestFunctionalparallelMountCmdany-port1977466483/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737372449300756476" to /tmp/TestFunctionalparallelMountCmdany-port1977466483/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737372449300756476" to /tmp/TestFunctionalparallelMountCmdany-port1977466483/001/test-1737372449300756476
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935944 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (225.907453ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0120 11:27:29.526920  537581 retry.go:31] will retry after 648.097161ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 20 11:27 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 20 11:27 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 20 11:27 test-1737372449300756476
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh cat /mount-9p/test-1737372449300756476
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-935944 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1e61de6a-fa94-4961-b396-2f37b4f34924] Pending
helpers_test.go:344: "busybox-mount" [1e61de6a-fa94-4961-b396-2f37b4f34924] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1e61de6a-fa94-4961-b396-2f37b4f34924] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1e61de6a-fa94-4961-b396-2f37b4f34924] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00438053s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-935944 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-935944 /tmp/TestFunctionalparallelMountCmdany-port1977466483/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.86s)
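The passing run above shows the usual 9p mount startup pattern: the first findmnt probe fails with exit status 1 and the helper retries roughly 650ms later, after which the mount is visible and the busybox pod can read the test files. A minimal standalone sketch of that probe-and-retry loop, assuming a `minikube` binary on PATH (the test drives out/minikube-linux-amd64 directly) and the profile and mount point from this run; the helper name is hypothetical, not the test's code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForNinePMount polls `minikube ssh` until findmnt reports a 9p
// filesystem at guestPath, mirroring the retry visible in the log above.
func waitForNinePMount(profile, guestPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("minikube", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", guestPath))
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mount is up: %s", out)
			return nil
		}
		time.Sleep(700 * time.Millisecond) // close to the ~650ms retry seen in the log
	}
	return fmt.Errorf("%s never showed a 9p mount", guestPath)
}

func main() {
	// Assumes `minikube mount <hostdir>:/mount-9p -p functional-935944`
	// is already running, as it is for the test above.
	if err := waitForNinePMount("functional-935944", "/mount-9p", 10); err != nil {
		fmt.Println(err)
	}
}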

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-935944 /tmp/TestFunctionalparallelMountCmdspecific-port4074144995/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935944 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.293202ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0120 11:27:37.405096  537581 retry.go:31] will retry after 717.509786ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-935944 /tmp/TestFunctionalparallelMountCmdspecific-port4074144995/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935944 ssh "sudo umount -f /mount-9p": exit status 1 (209.319643ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-935944 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-935944 /tmp/TestFunctionalparallelMountCmdspecific-port4074144995/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-935944 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3410922821/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-935944 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3410922821/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-935944 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3410922821/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935944 ssh "findmnt -T" /mount1: exit status 1 (291.934414ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0120 11:27:39.475799  537581 retry.go:31] will retry after 506.694721ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-935944 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-935944 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-935944 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3410922821/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-935944 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3410922821/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-935944 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3410922821/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2025/01/20 11:27:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-935944
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-935944
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-935944
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (194.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-845530 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0120 11:29:29.426509  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:57.130062  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-845530 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m13.466080846s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (194.16s)
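The start above uses the --ha flag, which provisions multiple control-plane nodes for the profile (the status output later in this run lists ha-845530, ha-845530-m02 and ha-845530-m03 as Control Plane), and the test follows the start with an immediate status check. A minimal sketch of the same start-then-verify sequence, assuming a `minikube` binary on PATH; the wrapper below is illustrative, not the test's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// startHA launches a multi-control-plane profile with the flags used by the
// StartCluster test above, then prints the node status report.
func startHA(profile string) error {
	start := exec.Command("minikube", "start", "-p", profile,
		"--ha", "--wait=true", "--memory=2200",
		"--driver=kvm2", "--container-runtime=containerd")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		return fmt.Errorf("start failed: %w", err)
	}
	out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := startHA("ha-845530"); err != nil {
		fmt.Println(err)
	}
}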

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-845530 -- rollout status deployment/busybox: (4.809206341s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-ht4r5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-s8kpz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-wwxpm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-ht4r5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-s8kpz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-wwxpm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-ht4r5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-s8kpz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-wwxpm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.02s)
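The deploy step resolves kubernetes.io, kubernetes.default and the full service FQDN from each busybox replica, so cluster DNS is exercised from every node that received a pod. A short sketch of that per-pod nslookup sweep with kubectl, assuming the same kube context and that only the busybox pods are running in the default namespace (as in this run); the helper name is hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkPodDNS lists the pods in the given kube context and runs nslookup
// inside each one for every name, as the DeployApp test does above.
func checkPodDNS(kubeContext string, names []string) error {
	out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		return err
	}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range names {
			res, err := exec.Command("kubectl", "--context", kubeContext,
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				return fmt.Errorf("%s failed to resolve %s: %s", pod, name, res)
			}
		}
	}
	return nil
}

func main() {
	names := []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"}
	if err := checkPodDNS("ha-845530", names); err != nil {
		fmt.Println(err)
	}
}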

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-ht4r5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-ht4r5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-s8kpz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-s8kpz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-wwxpm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845530 -- exec busybox-58667487b6-wwxpm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-845530 -v=7 --alsologtostderr
E0120 11:32:05.510217  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:32:05.516586  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:32:05.527982  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:32:05.549395  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:32:05.590910  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:32:05.672832  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:32:05.835070  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:32:06.156856  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:32:06.798773  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:32:08.080383  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-845530 -v=7 --alsologtostderr: (56.362341248s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 status -v=7 --alsologtostderr
E0120 11:32:10.641934  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.24s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-845530 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp testdata/cp-test.txt ha-845530:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1438738850/001/cp-test_ha-845530.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530:/home/docker/cp-test.txt ha-845530-m02:/home/docker/cp-test_ha-845530_ha-845530-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m02 "sudo cat /home/docker/cp-test_ha-845530_ha-845530-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530:/home/docker/cp-test.txt ha-845530-m03:/home/docker/cp-test_ha-845530_ha-845530-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m03 "sudo cat /home/docker/cp-test_ha-845530_ha-845530-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530:/home/docker/cp-test.txt ha-845530-m04:/home/docker/cp-test_ha-845530_ha-845530-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m04 "sudo cat /home/docker/cp-test_ha-845530_ha-845530-m04.txt"
E0120 11:32:15.763755  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp testdata/cp-test.txt ha-845530-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1438738850/001/cp-test_ha-845530-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530-m02:/home/docker/cp-test.txt ha-845530:/home/docker/cp-test_ha-845530-m02_ha-845530.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530 "sudo cat /home/docker/cp-test_ha-845530-m02_ha-845530.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530-m02:/home/docker/cp-test.txt ha-845530-m03:/home/docker/cp-test_ha-845530-m02_ha-845530-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m03 "sudo cat /home/docker/cp-test_ha-845530-m02_ha-845530-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530-m02:/home/docker/cp-test.txt ha-845530-m04:/home/docker/cp-test_ha-845530-m02_ha-845530-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m04 "sudo cat /home/docker/cp-test_ha-845530-m02_ha-845530-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp testdata/cp-test.txt ha-845530-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1438738850/001/cp-test_ha-845530-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530-m03:/home/docker/cp-test.txt ha-845530:/home/docker/cp-test_ha-845530-m03_ha-845530.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530 "sudo cat /home/docker/cp-test_ha-845530-m03_ha-845530.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530-m03:/home/docker/cp-test.txt ha-845530-m02:/home/docker/cp-test_ha-845530-m03_ha-845530-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m02 "sudo cat /home/docker/cp-test_ha-845530-m03_ha-845530-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530-m03:/home/docker/cp-test.txt ha-845530-m04:/home/docker/cp-test_ha-845530-m03_ha-845530-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m04 "sudo cat /home/docker/cp-test_ha-845530-m03_ha-845530-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp testdata/cp-test.txt ha-845530-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1438738850/001/cp-test_ha-845530-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530-m04:/home/docker/cp-test.txt ha-845530:/home/docker/cp-test_ha-845530-m04_ha-845530.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530 "sudo cat /home/docker/cp-test_ha-845530-m04_ha-845530.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530-m04:/home/docker/cp-test.txt ha-845530-m02:/home/docker/cp-test_ha-845530-m04_ha-845530-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m02 "sudo cat /home/docker/cp-test_ha-845530-m04_ha-845530-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 cp ha-845530-m04:/home/docker/cp-test.txt ha-845530-m03:/home/docker/cp-test_ha-845530-m04_ha-845530-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 ssh -n ha-845530-m03 "sudo cat /home/docker/cp-test_ha-845530-m04_ha-845530-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.17s)
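The CopyFile block above repeats one pattern for every node pair: `minikube cp` pushes testdata/cp-test.txt onto a node, then `minikube ssh -n <node> "sudo cat ..."` reads it back. A condensed sketch of the push-and-read-back half of that loop, assuming a `minikube` binary on PATH and the node names from this profile; the helper is illustrative, not the test's code:

package main

import (
	"fmt"
	"os/exec"
)

// copyAndVerify copies a local file to the same path on every node of the
// profile with `minikube cp`, then reads it back over `minikube ssh -n`.
func copyAndVerify(profile, local, remote string, nodes []string) error {
	for _, node := range nodes {
		cp := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote)
		if out, err := cp.CombinedOutput(); err != nil {
			return fmt.Errorf("cp to %s: %v: %s", node, err, out)
		}
		out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
			"sudo cat "+remote).CombinedOutput()
		if err != nil {
			return fmt.Errorf("read back from %s: %v", node, err)
		}
		fmt.Printf("%s: %s", node, out)
	}
	return nil
}

func main() {
	nodes := []string{"ha-845530", "ha-845530-m02", "ha-845530-m03", "ha-845530-m04"}
	if err := copyAndVerify("ha-845530", "testdata/cp-test.txt",
		"/home/docker/cp-test.txt", nodes); err != nil {
		fmt.Println(err)
	}
}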

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 node stop m02 -v=7 --alsologtostderr
E0120 11:32:26.005275  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:32:46.487019  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:33:27.448545  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-845530 node stop m02 -v=7 --alsologtostderr: (1m30.995490213s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845530 status -v=7 --alsologtostderr: exit status 7 (671.610548ms)

                                                
                                                
-- stdout --
	ha-845530
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845530-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-845530-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845530-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 11:33:56.056660  551817 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:33:56.056785  551817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:33:56.056796  551817 out.go:358] Setting ErrFile to fd 2...
	I0120 11:33:56.056802  551817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:33:56.057020  551817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	I0120 11:33:56.057257  551817 out.go:352] Setting JSON to false
	I0120 11:33:56.057295  551817 mustload.go:65] Loading cluster: ha-845530
	I0120 11:33:56.057396  551817 notify.go:220] Checking for updates...
	I0120 11:33:56.057715  551817 config.go:182] Loaded profile config "ha-845530": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 11:33:56.057738  551817 status.go:174] checking status of ha-845530 ...
	I0120 11:33:56.058198  551817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:33:56.058250  551817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:33:56.078368  551817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0120 11:33:56.078932  551817 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:33:56.079576  551817 main.go:141] libmachine: Using API Version  1
	I0120 11:33:56.079598  551817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:33:56.079961  551817 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:33:56.080227  551817 main.go:141] libmachine: (ha-845530) Calling .GetState
	I0120 11:33:56.081822  551817 status.go:371] ha-845530 host status = "Running" (err=<nil>)
	I0120 11:33:56.081845  551817 host.go:66] Checking if "ha-845530" exists ...
	I0120 11:33:56.082177  551817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:33:56.082220  551817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:33:56.097389  551817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34129
	I0120 11:33:56.097891  551817 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:33:56.098435  551817 main.go:141] libmachine: Using API Version  1
	I0120 11:33:56.098463  551817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:33:56.098791  551817 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:33:56.099049  551817 main.go:141] libmachine: (ha-845530) Calling .GetIP
	I0120 11:33:56.102131  551817 main.go:141] libmachine: (ha-845530) DBG | domain ha-845530 has defined MAC address 52:54:00:08:29:76 in network mk-ha-845530
	I0120 11:33:56.102699  551817 main.go:141] libmachine: (ha-845530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:29:76", ip: ""} in network mk-ha-845530: {Iface:virbr1 ExpiryTime:2025-01-20 12:28:06 +0000 UTC Type:0 Mac:52:54:00:08:29:76 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-845530 Clientid:01:52:54:00:08:29:76}
	I0120 11:33:56.102741  551817 main.go:141] libmachine: (ha-845530) DBG | domain ha-845530 has defined IP address 192.168.39.217 and MAC address 52:54:00:08:29:76 in network mk-ha-845530
	I0120 11:33:56.102987  551817 host.go:66] Checking if "ha-845530" exists ...
	I0120 11:33:56.103464  551817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:33:56.103534  551817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:33:56.120397  551817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43187
	I0120 11:33:56.120955  551817 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:33:56.121543  551817 main.go:141] libmachine: Using API Version  1
	I0120 11:33:56.121568  551817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:33:56.121964  551817 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:33:56.122189  551817 main.go:141] libmachine: (ha-845530) Calling .DriverName
	I0120 11:33:56.122452  551817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 11:33:56.122480  551817 main.go:141] libmachine: (ha-845530) Calling .GetSSHHostname
	I0120 11:33:56.125305  551817 main.go:141] libmachine: (ha-845530) DBG | domain ha-845530 has defined MAC address 52:54:00:08:29:76 in network mk-ha-845530
	I0120 11:33:56.125775  551817 main.go:141] libmachine: (ha-845530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:29:76", ip: ""} in network mk-ha-845530: {Iface:virbr1 ExpiryTime:2025-01-20 12:28:06 +0000 UTC Type:0 Mac:52:54:00:08:29:76 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-845530 Clientid:01:52:54:00:08:29:76}
	I0120 11:33:56.125828  551817 main.go:141] libmachine: (ha-845530) DBG | domain ha-845530 has defined IP address 192.168.39.217 and MAC address 52:54:00:08:29:76 in network mk-ha-845530
	I0120 11:33:56.126024  551817 main.go:141] libmachine: (ha-845530) Calling .GetSSHPort
	I0120 11:33:56.126246  551817 main.go:141] libmachine: (ha-845530) Calling .GetSSHKeyPath
	I0120 11:33:56.126428  551817 main.go:141] libmachine: (ha-845530) Calling .GetSSHUsername
	I0120 11:33:56.126596  551817 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/ha-845530/id_rsa Username:docker}
	I0120 11:33:56.211372  551817 ssh_runner.go:195] Run: systemctl --version
	I0120 11:33:56.218979  551817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 11:33:56.235870  551817 kubeconfig.go:125] found "ha-845530" server: "https://192.168.39.254:8443"
	I0120 11:33:56.235914  551817 api_server.go:166] Checking apiserver status ...
	I0120 11:33:56.235974  551817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 11:33:56.252118  551817 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W0120 11:33:56.267681  551817 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 11:33:56.267750  551817 ssh_runner.go:195] Run: ls
	I0120 11:33:56.272558  551817 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0120 11:33:56.277587  551817 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0120 11:33:56.277615  551817 status.go:463] ha-845530 apiserver status = Running (err=<nil>)
	I0120 11:33:56.277629  551817 status.go:176] ha-845530 status: &{Name:ha-845530 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:33:56.277656  551817 status.go:174] checking status of ha-845530-m02 ...
	I0120 11:33:56.278007  551817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:33:56.278055  551817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:33:56.293683  551817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40229
	I0120 11:33:56.294238  551817 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:33:56.294802  551817 main.go:141] libmachine: Using API Version  1
	I0120 11:33:56.294831  551817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:33:56.295136  551817 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:33:56.295386  551817 main.go:141] libmachine: (ha-845530-m02) Calling .GetState
	I0120 11:33:56.296979  551817 status.go:371] ha-845530-m02 host status = "Stopped" (err=<nil>)
	I0120 11:33:56.296997  551817 status.go:384] host is not running, skipping remaining checks
	I0120 11:33:56.297004  551817 status.go:176] ha-845530-m02 status: &{Name:ha-845530-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:33:56.297028  551817 status.go:174] checking status of ha-845530-m03 ...
	I0120 11:33:56.297327  551817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:33:56.297397  551817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:33:56.312987  551817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0120 11:33:56.313435  551817 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:33:56.313961  551817 main.go:141] libmachine: Using API Version  1
	I0120 11:33:56.313989  551817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:33:56.314276  551817 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:33:56.314519  551817 main.go:141] libmachine: (ha-845530-m03) Calling .GetState
	I0120 11:33:56.316219  551817 status.go:371] ha-845530-m03 host status = "Running" (err=<nil>)
	I0120 11:33:56.316241  551817 host.go:66] Checking if "ha-845530-m03" exists ...
	I0120 11:33:56.316610  551817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:33:56.316664  551817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:33:56.334237  551817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34879
	I0120 11:33:56.334696  551817 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:33:56.335266  551817 main.go:141] libmachine: Using API Version  1
	I0120 11:33:56.335294  551817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:33:56.335702  551817 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:33:56.335934  551817 main.go:141] libmachine: (ha-845530-m03) Calling .GetIP
	I0120 11:33:56.339235  551817 main.go:141] libmachine: (ha-845530-m03) DBG | domain ha-845530-m03 has defined MAC address 52:54:00:1c:06:f8 in network mk-ha-845530
	I0120 11:33:56.339774  551817 main.go:141] libmachine: (ha-845530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:06:f8", ip: ""} in network mk-ha-845530: {Iface:virbr1 ExpiryTime:2025-01-20 12:30:05 +0000 UTC Type:0 Mac:52:54:00:1c:06:f8 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-845530-m03 Clientid:01:52:54:00:1c:06:f8}
	I0120 11:33:56.339810  551817 main.go:141] libmachine: (ha-845530-m03) DBG | domain ha-845530-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:1c:06:f8 in network mk-ha-845530
	I0120 11:33:56.339963  551817 host.go:66] Checking if "ha-845530-m03" exists ...
	I0120 11:33:56.340302  551817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:33:56.340346  551817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:33:56.356638  551817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0120 11:33:56.357197  551817 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:33:56.357728  551817 main.go:141] libmachine: Using API Version  1
	I0120 11:33:56.357751  551817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:33:56.358132  551817 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:33:56.358347  551817 main.go:141] libmachine: (ha-845530-m03) Calling .DriverName
	I0120 11:33:56.358543  551817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 11:33:56.358569  551817 main.go:141] libmachine: (ha-845530-m03) Calling .GetSSHHostname
	I0120 11:33:56.361289  551817 main.go:141] libmachine: (ha-845530-m03) DBG | domain ha-845530-m03 has defined MAC address 52:54:00:1c:06:f8 in network mk-ha-845530
	I0120 11:33:56.361710  551817 main.go:141] libmachine: (ha-845530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:06:f8", ip: ""} in network mk-ha-845530: {Iface:virbr1 ExpiryTime:2025-01-20 12:30:05 +0000 UTC Type:0 Mac:52:54:00:1c:06:f8 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-845530-m03 Clientid:01:52:54:00:1c:06:f8}
	I0120 11:33:56.361735  551817 main.go:141] libmachine: (ha-845530-m03) DBG | domain ha-845530-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:1c:06:f8 in network mk-ha-845530
	I0120 11:33:56.361921  551817 main.go:141] libmachine: (ha-845530-m03) Calling .GetSSHPort
	I0120 11:33:56.362109  551817 main.go:141] libmachine: (ha-845530-m03) Calling .GetSSHKeyPath
	I0120 11:33:56.362272  551817 main.go:141] libmachine: (ha-845530-m03) Calling .GetSSHUsername
	I0120 11:33:56.362437  551817 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/ha-845530-m03/id_rsa Username:docker}
	I0120 11:33:56.447196  551817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 11:33:56.466784  551817 kubeconfig.go:125] found "ha-845530" server: "https://192.168.39.254:8443"
	I0120 11:33:56.466837  551817 api_server.go:166] Checking apiserver status ...
	I0120 11:33:56.466888  551817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 11:33:56.485228  551817 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup
	W0120 11:33:56.497274  551817 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 11:33:56.497342  551817 ssh_runner.go:195] Run: ls
	I0120 11:33:56.502126  551817 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0120 11:33:56.507113  551817 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0120 11:33:56.507141  551817 status.go:463] ha-845530-m03 apiserver status = Running (err=<nil>)
	I0120 11:33:56.507150  551817 status.go:176] ha-845530-m03 status: &{Name:ha-845530-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:33:56.507166  551817 status.go:174] checking status of ha-845530-m04 ...
	I0120 11:33:56.507559  551817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:33:56.507605  551817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:33:56.523004  551817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I0120 11:33:56.523535  551817 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:33:56.524128  551817 main.go:141] libmachine: Using API Version  1
	I0120 11:33:56.524163  551817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:33:56.524558  551817 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:33:56.524826  551817 main.go:141] libmachine: (ha-845530-m04) Calling .GetState
	I0120 11:33:56.526519  551817 status.go:371] ha-845530-m04 host status = "Running" (err=<nil>)
	I0120 11:33:56.526539  551817 host.go:66] Checking if "ha-845530-m04" exists ...
	I0120 11:33:56.526841  551817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:33:56.526880  551817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:33:56.543900  551817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38297
	I0120 11:33:56.544326  551817 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:33:56.544825  551817 main.go:141] libmachine: Using API Version  1
	I0120 11:33:56.544851  551817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:33:56.545189  551817 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:33:56.545396  551817 main.go:141] libmachine: (ha-845530-m04) Calling .GetIP
	I0120 11:33:56.548069  551817 main.go:141] libmachine: (ha-845530-m04) DBG | domain ha-845530-m04 has defined MAC address 52:54:00:73:37:ef in network mk-ha-845530
	I0120 11:33:56.548456  551817 main.go:141] libmachine: (ha-845530-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:37:ef", ip: ""} in network mk-ha-845530: {Iface:virbr1 ExpiryTime:2025-01-20 12:31:29 +0000 UTC Type:0 Mac:52:54:00:73:37:ef Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-845530-m04 Clientid:01:52:54:00:73:37:ef}
	I0120 11:33:56.548485  551817 main.go:141] libmachine: (ha-845530-m04) DBG | domain ha-845530-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:73:37:ef in network mk-ha-845530
	I0120 11:33:56.548666  551817 host.go:66] Checking if "ha-845530-m04" exists ...
	I0120 11:33:56.548968  551817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:33:56.549007  551817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:33:56.565109  551817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46143
	I0120 11:33:56.565711  551817 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:33:56.566260  551817 main.go:141] libmachine: Using API Version  1
	I0120 11:33:56.566292  551817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:33:56.566692  551817 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:33:56.566936  551817 main.go:141] libmachine: (ha-845530-m04) Calling .DriverName
	I0120 11:33:56.567177  551817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 11:33:56.567208  551817 main.go:141] libmachine: (ha-845530-m04) Calling .GetSSHHostname
	I0120 11:33:56.570398  551817 main.go:141] libmachine: (ha-845530-m04) DBG | domain ha-845530-m04 has defined MAC address 52:54:00:73:37:ef in network mk-ha-845530
	I0120 11:33:56.570898  551817 main.go:141] libmachine: (ha-845530-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:37:ef", ip: ""} in network mk-ha-845530: {Iface:virbr1 ExpiryTime:2025-01-20 12:31:29 +0000 UTC Type:0 Mac:52:54:00:73:37:ef Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-845530-m04 Clientid:01:52:54:00:73:37:ef}
	I0120 11:33:56.570927  551817 main.go:141] libmachine: (ha-845530-m04) DBG | domain ha-845530-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:73:37:ef in network mk-ha-845530
	I0120 11:33:56.571234  551817 main.go:141] libmachine: (ha-845530-m04) Calling .GetSSHPort
	I0120 11:33:56.571457  551817 main.go:141] libmachine: (ha-845530-m04) Calling .GetSSHKeyPath
	I0120 11:33:56.571636  551817 main.go:141] libmachine: (ha-845530-m04) Calling .GetSSHUsername
	I0120 11:33:56.571748  551817 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/ha-845530-m04/id_rsa Username:docker}
	I0120 11:33:56.654841  551817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 11:33:56.674444  551817 status.go:176] ha-845530-m04 status: &{Name:ha-845530-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.67s)
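Note that `minikube status` exits with status 7 here even though it prints a complete report: with m02 stopped, the command signals a degraded profile through its exit code while the stdout text stays parseable. A small sketch of handling that, separating "non-zero but reportable" from a real invocation failure; this assumes a `minikube` binary on PATH and is not the test's own code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// clusterStatus runs `minikube status` and returns the text report together
// with the exit code; a non-zero code (7 in the log above) means some node
// is stopped, but the report itself is still worth showing.
func clusterStatus(profile string) (string, int, error) {
	out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
	if err == nil {
		return string(out), 0, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return string(out), exitErr.ExitCode(), nil // degraded, report still usable
	}
	return "", 0, err // could not run the binary at all
}

func main() {
	report, code, err := clusterStatus("ha-845530")
	if err != nil {
		panic(err)
	}
	fmt.Printf("exit code %d\n%s", code, report)
}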

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (43.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 node start m02 -v=7 --alsologtostderr
E0120 11:34:29.425989  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-845530 node start m02 -v=7 --alsologtostderr: (42.932433701s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.87s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (472.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-845530 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-845530 -v=7 --alsologtostderr
E0120 11:34:49.370708  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:37:05.510224  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:37:33.212451  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-845530 -v=7 --alsologtostderr: (4m34.166454948s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-845530 --wait=true -v=7 --alsologtostderr
E0120 11:39:29.425919  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:40:52.492097  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:42:05.510055  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-845530 --wait=true -v=7 --alsologtostderr: (3m17.883690264s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-845530
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (472.16s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-845530 node delete m03 -v=7 --alsologtostderr: (6.296068525s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.07s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (183.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 stop -v=7 --alsologtostderr
E0120 11:44:29.425635  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-845530 stop -v=7 --alsologtostderr: (3m3.28809313s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845530 status -v=7 --alsologtostderr: exit status 7 (113.035894ms)

                                                
                                                
-- stdout --
	ha-845530
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-845530-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-845530-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 11:45:45.319832  555552 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:45:45.320334  555552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:45:45.320355  555552 out.go:358] Setting ErrFile to fd 2...
	I0120 11:45:45.320363  555552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:45:45.320777  555552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	I0120 11:45:45.321055  555552 out.go:352] Setting JSON to false
	I0120 11:45:45.321094  555552 mustload.go:65] Loading cluster: ha-845530
	I0120 11:45:45.321246  555552 notify.go:220] Checking for updates...
	I0120 11:45:45.321793  555552 config.go:182] Loaded profile config "ha-845530": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 11:45:45.321835  555552 status.go:174] checking status of ha-845530 ...
	I0120 11:45:45.322243  555552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:45:45.322293  555552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:45:45.345163  555552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33453
	I0120 11:45:45.345613  555552 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:45:45.346252  555552 main.go:141] libmachine: Using API Version  1
	I0120 11:45:45.346273  555552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:45:45.346659  555552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:45:45.346851  555552 main.go:141] libmachine: (ha-845530) Calling .GetState
	I0120 11:45:45.348447  555552 status.go:371] ha-845530 host status = "Stopped" (err=<nil>)
	I0120 11:45:45.348464  555552 status.go:384] host is not running, skipping remaining checks
	I0120 11:45:45.348471  555552 status.go:176] ha-845530 status: &{Name:ha-845530 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:45:45.348509  555552 status.go:174] checking status of ha-845530-m02 ...
	I0120 11:45:45.348807  555552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:45:45.348851  555552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:45:45.363480  555552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0120 11:45:45.363917  555552 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:45:45.364383  555552 main.go:141] libmachine: Using API Version  1
	I0120 11:45:45.364402  555552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:45:45.364697  555552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:45:45.364879  555552 main.go:141] libmachine: (ha-845530-m02) Calling .GetState
	I0120 11:45:45.366379  555552 status.go:371] ha-845530-m02 host status = "Stopped" (err=<nil>)
	I0120 11:45:45.366392  555552 status.go:384] host is not running, skipping remaining checks
	I0120 11:45:45.366400  555552 status.go:176] ha-845530-m02 status: &{Name:ha-845530-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:45:45.366421  555552 status.go:174] checking status of ha-845530-m04 ...
	I0120 11:45:45.366709  555552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:45:45.366755  555552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:45:45.381163  555552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I0120 11:45:45.381607  555552 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:45:45.382213  555552 main.go:141] libmachine: Using API Version  1
	I0120 11:45:45.382245  555552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:45:45.382548  555552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:45:45.382735  555552 main.go:141] libmachine: (ha-845530-m04) Calling .GetState
	I0120 11:45:45.384118  555552 status.go:371] ha-845530-m04 host status = "Stopped" (err=<nil>)
	I0120 11:45:45.384133  555552 status.go:384] host is not running, skipping remaining checks
	I0120 11:45:45.384140  555552 status.go:176] ha-845530-m04 status: &{Name:ha-845530-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (183.40s)

TestMultiControlPlane/serial/RestartCluster (165.66s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-845530 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0120 11:47:05.509736  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:48:28.574284  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-845530 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m44.89785792s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (165.66s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

TestMultiControlPlane/serial/AddSecondaryNode (72.74s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-845530 --control-plane -v=7 --alsologtostderr
E0120 11:49:29.425954  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-845530 --control-plane -v=7 --alsologtostderr: (1m11.875690629s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-845530 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.74s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (83.14s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-212911 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-212911 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m23.136352761s)
--- PASS: TestJSONOutput/start/Command (83.14s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-212911 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-212911 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.61s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-212911 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-212911 --output=json --user=testUser: (6.609770536s)
--- PASS: TestJSONOutput/stop/Command (6.61s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-615671 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-615671 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.489392ms)

-- stdout --
	{"specversion":"1.0","id":"8b79d3a1-1539-4301-a01b-e9948a7183b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-615671] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"92c3a26a-f0f9-4fc2-9e56-c417eb58c28f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20151"}}
	{"specversion":"1.0","id":"35217ebe-a5ac-44f0-bfc7-c5a833e17b14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d0e345d0-9751-441b-bd9b-58819b08dee8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig"}}
	{"specversion":"1.0","id":"74fbd3a0-54e5-45b9-b988-38af94ea2089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube"}}
	{"specversion":"1.0","id":"eb84e1f4-899e-46a5-a018-91bf5633934d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f8a815fa-ba53-4272-85ff-098f0a55c6bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9ce38b0c-29e7-4182-9a2d-fdebec3cf282","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-615671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-615671
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (96.48s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-756693 --driver=kvm2  --container-runtime=containerd
E0120 11:52:05.510111  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-756693 --driver=kvm2  --container-runtime=containerd: (47.686042278s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-769205 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-769205 --driver=kvm2  --container-runtime=containerd: (45.880335063s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-756693
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-769205
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-769205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-769205
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-769205: (1.003504067s)
helpers_test.go:175: Cleaning up "first-756693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-756693
--- PASS: TestMinikubeProfile (96.48s)

TestMountStart/serial/StartWithMountFirst (28.97s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-104204 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-104204 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.965855958s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.97s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-104204 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-104204 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (27.65s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-120476 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-120476 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.647614219s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.65s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120476 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120476 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-104204 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120476 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120476 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.33s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-120476
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-120476: (1.328977226s)
--- PASS: TestMountStart/serial/Stop (1.33s)

TestMountStart/serial/RestartStopped (24.75s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-120476
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-120476: (23.748611s)
--- PASS: TestMountStart/serial/RestartStopped (24.75s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120476 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120476 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (112.68s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-152613 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0120 11:54:29.425614  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-152613 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m52.26684677s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.68s)

TestMultiNode/serial/DeployApp2Nodes (6.78s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-152613 -- rollout status deployment/busybox: (5.182653477s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- exec busybox-58667487b6-69ztz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- exec busybox-58667487b6-clbpz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- exec busybox-58667487b6-69ztz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- exec busybox-58667487b6-clbpz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- exec busybox-58667487b6-69ztz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- exec busybox-58667487b6-clbpz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.78s)

TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- exec busybox-58667487b6-69ztz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- exec busybox-58667487b6-69ztz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- exec busybox-58667487b6-clbpz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-152613 -- exec busybox-58667487b6-clbpz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

TestMultiNode/serial/AddNode (51.97s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-152613 -v 3 --alsologtostderr
E0120 11:57:05.510049  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-152613 -v 3 --alsologtostderr: (51.397931691s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.97s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-152613 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.58s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

TestMultiNode/serial/CopyFile (7.4s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 cp testdata/cp-test.txt multinode-152613:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 cp multinode-152613:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2627913253/001/cp-test_multinode-152613.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 cp multinode-152613:/home/docker/cp-test.txt multinode-152613-m02:/home/docker/cp-test_multinode-152613_multinode-152613-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613-m02 "sudo cat /home/docker/cp-test_multinode-152613_multinode-152613-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 cp multinode-152613:/home/docker/cp-test.txt multinode-152613-m03:/home/docker/cp-test_multinode-152613_multinode-152613-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613-m03 "sudo cat /home/docker/cp-test_multinode-152613_multinode-152613-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 cp testdata/cp-test.txt multinode-152613-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 cp multinode-152613-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2627913253/001/cp-test_multinode-152613-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 cp multinode-152613-m02:/home/docker/cp-test.txt multinode-152613:/home/docker/cp-test_multinode-152613-m02_multinode-152613.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613 "sudo cat /home/docker/cp-test_multinode-152613-m02_multinode-152613.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 cp multinode-152613-m02:/home/docker/cp-test.txt multinode-152613-m03:/home/docker/cp-test_multinode-152613-m02_multinode-152613-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613-m03 "sudo cat /home/docker/cp-test_multinode-152613-m02_multinode-152613-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 cp testdata/cp-test.txt multinode-152613-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 cp multinode-152613-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2627913253/001/cp-test_multinode-152613-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 cp multinode-152613-m03:/home/docker/cp-test.txt multinode-152613:/home/docker/cp-test_multinode-152613-m03_multinode-152613.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613 "sudo cat /home/docker/cp-test_multinode-152613-m03_multinode-152613.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 cp multinode-152613-m03:/home/docker/cp-test.txt multinode-152613-m02:/home/docker/cp-test_multinode-152613-m03_multinode-152613-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 ssh -n multinode-152613-m02 "sudo cat /home/docker/cp-test_multinode-152613-m03_multinode-152613-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.40s)

TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-152613 node stop m03: (1.439650066s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-152613 status: exit status 7 (444.267146ms)

-- stdout --
	multinode-152613
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-152613-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-152613-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-152613 status --alsologtostderr: exit status 7 (425.891533ms)

-- stdout --
	multinode-152613
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-152613-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-152613-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 11:57:24.751437  563411 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:57:24.751699  563411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:57:24.751710  563411 out.go:358] Setting ErrFile to fd 2...
	I0120 11:57:24.751715  563411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:57:24.751878  563411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	I0120 11:57:24.752055  563411 out.go:352] Setting JSON to false
	I0120 11:57:24.752092  563411 mustload.go:65] Loading cluster: multinode-152613
	I0120 11:57:24.752137  563411 notify.go:220] Checking for updates...
	I0120 11:57:24.753019  563411 config.go:182] Loaded profile config "multinode-152613": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 11:57:24.753098  563411 status.go:174] checking status of multinode-152613 ...
	I0120 11:57:24.754192  563411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:57:24.754246  563411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:57:24.772548  563411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35759
	I0120 11:57:24.773061  563411 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:57:24.773784  563411 main.go:141] libmachine: Using API Version  1
	I0120 11:57:24.773833  563411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:57:24.774211  563411 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:57:24.774383  563411 main.go:141] libmachine: (multinode-152613) Calling .GetState
	I0120 11:57:24.775843  563411 status.go:371] multinode-152613 host status = "Running" (err=<nil>)
	I0120 11:57:24.775862  563411 host.go:66] Checking if "multinode-152613" exists ...
	I0120 11:57:24.776118  563411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:57:24.776159  563411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:57:24.791381  563411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0120 11:57:24.791782  563411 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:57:24.792274  563411 main.go:141] libmachine: Using API Version  1
	I0120 11:57:24.792303  563411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:57:24.792624  563411 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:57:24.792817  563411 main.go:141] libmachine: (multinode-152613) Calling .GetIP
	I0120 11:57:24.795348  563411 main.go:141] libmachine: (multinode-152613) DBG | domain multinode-152613 has defined MAC address 52:54:00:89:69:94 in network mk-multinode-152613
	I0120 11:57:24.795755  563411 main.go:141] libmachine: (multinode-152613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:69:94", ip: ""} in network mk-multinode-152613: {Iface:virbr1 ExpiryTime:2025-01-20 12:54:38 +0000 UTC Type:0 Mac:52:54:00:89:69:94 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-152613 Clientid:01:52:54:00:89:69:94}
	I0120 11:57:24.795774  563411 main.go:141] libmachine: (multinode-152613) DBG | domain multinode-152613 has defined IP address 192.168.39.200 and MAC address 52:54:00:89:69:94 in network mk-multinode-152613
	I0120 11:57:24.795932  563411 host.go:66] Checking if "multinode-152613" exists ...
	I0120 11:57:24.796301  563411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:57:24.796354  563411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:57:24.811282  563411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0120 11:57:24.811690  563411 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:57:24.812118  563411 main.go:141] libmachine: Using API Version  1
	I0120 11:57:24.812137  563411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:57:24.812416  563411 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:57:24.812607  563411 main.go:141] libmachine: (multinode-152613) Calling .DriverName
	I0120 11:57:24.812840  563411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 11:57:24.812874  563411 main.go:141] libmachine: (multinode-152613) Calling .GetSSHHostname
	I0120 11:57:24.815164  563411 main.go:141] libmachine: (multinode-152613) DBG | domain multinode-152613 has defined MAC address 52:54:00:89:69:94 in network mk-multinode-152613
	I0120 11:57:24.815582  563411 main.go:141] libmachine: (multinode-152613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:69:94", ip: ""} in network mk-multinode-152613: {Iface:virbr1 ExpiryTime:2025-01-20 12:54:38 +0000 UTC Type:0 Mac:52:54:00:89:69:94 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-152613 Clientid:01:52:54:00:89:69:94}
	I0120 11:57:24.815613  563411 main.go:141] libmachine: (multinode-152613) DBG | domain multinode-152613 has defined IP address 192.168.39.200 and MAC address 52:54:00:89:69:94 in network mk-multinode-152613
	I0120 11:57:24.815673  563411 main.go:141] libmachine: (multinode-152613) Calling .GetSSHPort
	I0120 11:57:24.815857  563411 main.go:141] libmachine: (multinode-152613) Calling .GetSSHKeyPath
	I0120 11:57:24.815978  563411 main.go:141] libmachine: (multinode-152613) Calling .GetSSHUsername
	I0120 11:57:24.816121  563411 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/multinode-152613/id_rsa Username:docker}
	I0120 11:57:24.901828  563411 ssh_runner.go:195] Run: systemctl --version
	I0120 11:57:24.908358  563411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 11:57:24.923711  563411 kubeconfig.go:125] found "multinode-152613" server: "https://192.168.39.200:8443"
	I0120 11:57:24.923752  563411 api_server.go:166] Checking apiserver status ...
	I0120 11:57:24.923797  563411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 11:57:24.937317  563411 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1111/cgroup
	W0120 11:57:24.947495  563411 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1111/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 11:57:24.947557  563411 ssh_runner.go:195] Run: ls
	I0120 11:57:24.951960  563411 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0120 11:57:24.956391  563411 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0120 11:57:24.956415  563411 status.go:463] multinode-152613 apiserver status = Running (err=<nil>)
	I0120 11:57:24.956425  563411 status.go:176] multinode-152613 status: &{Name:multinode-152613 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:57:24.956446  563411 status.go:174] checking status of multinode-152613-m02 ...
	I0120 11:57:24.956855  563411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:57:24.956924  563411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:57:24.972920  563411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0120 11:57:24.973388  563411 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:57:24.973928  563411 main.go:141] libmachine: Using API Version  1
	I0120 11:57:24.973950  563411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:57:24.974303  563411 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:57:24.974498  563411 main.go:141] libmachine: (multinode-152613-m02) Calling .GetState
	I0120 11:57:24.976018  563411 status.go:371] multinode-152613-m02 host status = "Running" (err=<nil>)
	I0120 11:57:24.976033  563411 host.go:66] Checking if "multinode-152613-m02" exists ...
	I0120 11:57:24.976323  563411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:57:24.976358  563411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:57:24.991064  563411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35807
	I0120 11:57:24.991476  563411 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:57:24.991935  563411 main.go:141] libmachine: Using API Version  1
	I0120 11:57:24.991954  563411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:57:24.992266  563411 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:57:24.992459  563411 main.go:141] libmachine: (multinode-152613-m02) Calling .GetIP
	I0120 11:57:24.995389  563411 main.go:141] libmachine: (multinode-152613-m02) DBG | domain multinode-152613-m02 has defined MAC address 52:54:00:32:8e:e2 in network mk-multinode-152613
	I0120 11:57:24.995792  563411 main.go:141] libmachine: (multinode-152613-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:e2", ip: ""} in network mk-multinode-152613: {Iface:virbr1 ExpiryTime:2025-01-20 12:55:39 +0000 UTC Type:0 Mac:52:54:00:32:8e:e2 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:multinode-152613-m02 Clientid:01:52:54:00:32:8e:e2}
	I0120 11:57:24.995831  563411 main.go:141] libmachine: (multinode-152613-m02) DBG | domain multinode-152613-m02 has defined IP address 192.168.39.114 and MAC address 52:54:00:32:8e:e2 in network mk-multinode-152613
	I0120 11:57:24.995980  563411 host.go:66] Checking if "multinode-152613-m02" exists ...
	I0120 11:57:24.996396  563411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:57:24.996436  563411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:57:25.011275  563411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0120 11:57:25.011653  563411 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:57:25.012080  563411 main.go:141] libmachine: Using API Version  1
	I0120 11:57:25.012100  563411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:57:25.012433  563411 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:57:25.012613  563411 main.go:141] libmachine: (multinode-152613-m02) Calling .DriverName
	I0120 11:57:25.012790  563411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 11:57:25.012813  563411 main.go:141] libmachine: (multinode-152613-m02) Calling .GetSSHHostname
	I0120 11:57:25.015352  563411 main.go:141] libmachine: (multinode-152613-m02) DBG | domain multinode-152613-m02 has defined MAC address 52:54:00:32:8e:e2 in network mk-multinode-152613
	I0120 11:57:25.015713  563411 main.go:141] libmachine: (multinode-152613-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:e2", ip: ""} in network mk-multinode-152613: {Iface:virbr1 ExpiryTime:2025-01-20 12:55:39 +0000 UTC Type:0 Mac:52:54:00:32:8e:e2 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:multinode-152613-m02 Clientid:01:52:54:00:32:8e:e2}
	I0120 11:57:25.015740  563411 main.go:141] libmachine: (multinode-152613-m02) DBG | domain multinode-152613-m02 has defined IP address 192.168.39.114 and MAC address 52:54:00:32:8e:e2 in network mk-multinode-152613
	I0120 11:57:25.015851  563411 main.go:141] libmachine: (multinode-152613-m02) Calling .GetSSHPort
	I0120 11:57:25.016055  563411 main.go:141] libmachine: (multinode-152613-m02) Calling .GetSSHKeyPath
	I0120 11:57:25.016187  563411 main.go:141] libmachine: (multinode-152613-m02) Calling .GetSSHUsername
	I0120 11:57:25.016341  563411 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/multinode-152613-m02/id_rsa Username:docker}
	I0120 11:57:25.093588  563411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 11:57:25.107626  563411 status.go:176] multinode-152613-m02 status: &{Name:multinode-152613-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:57:25.107666  563411 status.go:174] checking status of multinode-152613-m03 ...
	I0120 11:57:25.108017  563411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 11:57:25.108071  563411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:57:25.124527  563411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0120 11:57:25.124941  563411 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:57:25.125416  563411 main.go:141] libmachine: Using API Version  1
	I0120 11:57:25.125438  563411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:57:25.125761  563411 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:57:25.125981  563411 main.go:141] libmachine: (multinode-152613-m03) Calling .GetState
	I0120 11:57:25.127539  563411 status.go:371] multinode-152613-m03 host status = "Stopped" (err=<nil>)
	I0120 11:57:25.127553  563411 status.go:384] host is not running, skipping remaining checks
	I0120 11:57:25.127558  563411 status.go:176] multinode-152613-m03 status: &{Name:multinode-152613-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)

TestMultiNode/serial/StartAfterStop (36.29s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 node start m03 -v=7 --alsologtostderr
E0120 11:57:32.493988  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-152613 node start m03 -v=7 --alsologtostderr: (35.652692029s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.29s)

TestMultiNode/serial/RestartKeepsNodes (332.13s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-152613
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-152613
E0120 11:59:29.428404  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-152613: (3m3.297914094s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-152613 --wait=true -v=8 --alsologtostderr
E0120 12:02:05.509828  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-152613 --wait=true -v=8 --alsologtostderr: (2m28.728095898s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-152613
--- PASS: TestMultiNode/serial/RestartKeepsNodes (332.13s)

TestMultiNode/serial/DeleteNode (2.26s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-152613 node delete m03: (1.705184204s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.26s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 stop
E0120 12:04:29.428799  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:05:08.576715  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-152613 stop: (3m1.698354071s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-152613 status: exit status 7 (95.576549ms)

                                                
                                                
-- stdout --
	multinode-152613
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-152613-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-152613 status --alsologtostderr: exit status 7 (88.734574ms)

                                                
                                                
-- stdout --
	multinode-152613
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-152613-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:06:37.657682  566189 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:06:37.657831  566189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:06:37.657842  566189 out.go:358] Setting ErrFile to fd 2...
	I0120 12:06:37.657847  566189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:06:37.658053  566189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	I0120 12:06:37.658276  566189 out.go:352] Setting JSON to false
	I0120 12:06:37.658313  566189 mustload.go:65] Loading cluster: multinode-152613
	I0120 12:06:37.658422  566189 notify.go:220] Checking for updates...
	I0120 12:06:37.658818  566189 config.go:182] Loaded profile config "multinode-152613": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:06:37.658843  566189 status.go:174] checking status of multinode-152613 ...
	I0120 12:06:37.659290  566189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:06:37.659343  566189 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:06:37.675083  566189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0120 12:06:37.675546  566189 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:06:37.676178  566189 main.go:141] libmachine: Using API Version  1
	I0120 12:06:37.676228  566189 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:06:37.676571  566189 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:06:37.676773  566189 main.go:141] libmachine: (multinode-152613) Calling .GetState
	I0120 12:06:37.678492  566189 status.go:371] multinode-152613 host status = "Stopped" (err=<nil>)
	I0120 12:06:37.678509  566189 status.go:384] host is not running, skipping remaining checks
	I0120 12:06:37.678514  566189 status.go:176] multinode-152613 status: &{Name:multinode-152613 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 12:06:37.678537  566189 status.go:174] checking status of multinode-152613-m02 ...
	I0120 12:06:37.678859  566189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:06:37.678905  566189 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:06:37.694090  566189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38819
	I0120 12:06:37.694604  566189 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:06:37.695114  566189 main.go:141] libmachine: Using API Version  1
	I0120 12:06:37.695135  566189 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:06:37.695415  566189 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:06:37.695587  566189 main.go:141] libmachine: (multinode-152613-m02) Calling .GetState
	I0120 12:06:37.696844  566189 status.go:371] multinode-152613-m02 host status = "Stopped" (err=<nil>)
	I0120 12:06:37.696855  566189 status.go:384] host is not running, skipping remaining checks
	I0120 12:06:37.696861  566189 status.go:176] multinode-152613-m02 status: &{Name:multinode-152613-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.88s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (94.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-152613 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0120 12:07:05.510192  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-152613 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m33.833294156s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-152613 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (94.38s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-152613
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-152613-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-152613-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (66.735404ms)

                                                
                                                
-- stdout --
	* [multinode-152613-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-152613-m02' is duplicated with machine name 'multinode-152613-m02' in profile 'multinode-152613'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-152613-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-152613-m03 --driver=kvm2  --container-runtime=containerd: (44.438363375s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-152613
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-152613: exit status 80 (213.251305ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-152613 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-152613-m03 already exists in multinode-152613-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-152613-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-152613-m03: (1.001992896s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.77s)

                                                
                                    
TestPreload (269.7s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-155368 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0120 12:09:29.425646  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-155368 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m55.617956643s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-155368 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-155368 image pull gcr.io/k8s-minikube/busybox: (2.261388341s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-155368
E0120 12:12:05.509363  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-155368: (1m30.769435402s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-155368 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-155368 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (59.774254537s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-155368 image list
helpers_test.go:175: Cleaning up "test-preload-155368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-155368
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-155368: (1.061186869s)
--- PASS: TestPreload (269.70s)
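Stripped of the test-harness wrappers, the preload scenario above boils down to the sequence below; a sketch using the same flags the log records, presumably to confirm that an image pulled into a --preload=false cluster is still listed after a stop/start cycle:

	# Create a cluster without the preloaded image tarball, pull an extra image,
	# restart the cluster, then list images to check the pulled one survived.
	minikube start -p test-preload-155368 --memory=2200 --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
	minikube -p test-preload-155368 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-155368
	minikube start -p test-preload-155368 --memory=2200 --driver=kvm2 --container-runtime=containerd
	minikube -p test-preload-155368 image list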

                                                
                                    
TestScheduledStopUnix (117.07s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-857748 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0120 12:14:12.498073  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-857748 --memory=2048 --driver=kvm2  --container-runtime=containerd: (45.351881442s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-857748 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-857748 -n scheduled-stop-857748
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-857748 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0120 12:14:15.080562  537581 retry.go:31] will retry after 73.897µs: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.081739  537581 retry.go:31] will retry after 156.562µs: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.082859  537581 retry.go:31] will retry after 294.473µs: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.083993  537581 retry.go:31] will retry after 353.737µs: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.085123  537581 retry.go:31] will retry after 276.582µs: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.086242  537581 retry.go:31] will retry after 872.886µs: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.087365  537581 retry.go:31] will retry after 1.466421ms: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.089561  537581 retry.go:31] will retry after 1.130001ms: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.091769  537581 retry.go:31] will retry after 3.501542ms: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.095982  537581 retry.go:31] will retry after 5.551983ms: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.102201  537581 retry.go:31] will retry after 4.428804ms: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.107397  537581 retry.go:31] will retry after 12.683575ms: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.120659  537581 retry.go:31] will retry after 13.957942ms: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.134927  537581 retry.go:31] will retry after 13.825529ms: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.149198  537581 retry.go:31] will retry after 15.290125ms: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
I0120 12:14:15.165465  537581 retry.go:31] will retry after 40.034137ms: open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/scheduled-stop-857748/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-857748 --cancel-scheduled
E0120 12:14:29.428659  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-857748 -n scheduled-stop-857748
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-857748
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-857748 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-857748
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-857748: exit status 7 (70.350087ms)

                                                
                                                
-- stdout --
	scheduled-stop-857748
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-857748 -n scheduled-stop-857748
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-857748 -n scheduled-stop-857748: exit status 7 (77.155667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-857748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-857748
--- PASS: TestScheduledStopUnix (117.07s)
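The scheduled-stop workflow exercised above can be reproduced by hand with the same flags the log records; a minimal sketch (the sleep is only there to let the 15s window expire and is not part of the test):

	# Arm a stop 5 minutes out, confirm a timer is set, then cancel it.
	minikube stop -p scheduled-stop-857748 --schedule 5m
	minikube status --format='{{.TimeToStop}}' -p scheduled-stop-857748
	minikube stop -p scheduled-stop-857748 --cancel-scheduled

	# Re-arm with a short window and verify the host ends up Stopped (status exits 7).
	minikube stop -p scheduled-stop-857748 --schedule 15s
	sleep 20
	minikube status --format='{{.Host}}' -p scheduled-stop-857748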

                                                
                                    
TestRunningBinaryUpgrade (201.32s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2460985336 start -p running-upgrade-020475 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2460985336 start -p running-upgrade-020475 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m7.22915225s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-020475 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-020475 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m10.264535265s)
helpers_test.go:175: Cleaning up "running-upgrade-020475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-020475
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-020475: (1.187027945s)
--- PASS: TestRunningBinaryUpgrade (201.32s)

                                                
                                    
TestKubernetesUpgrade (166.44s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-144895 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-144895 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m3.934403088s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-144895
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-144895: (2.330166913s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-144895 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-144895 status --format={{.Host}}: exit status 7 (89.971191ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-144895 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-144895 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (37.064502172s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-144895 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-144895 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-144895 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (100.490814ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-144895] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-144895
	    minikube start -p kubernetes-upgrade-144895 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1448952 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.0, by running:
	    
	    minikube start -p kubernetes-upgrade-144895 --kubernetes-version=v1.32.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-144895 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-144895 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m1.405985925s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-144895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-144895
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-144895: (1.456562234s)
--- PASS: TestKubernetesUpgrade (166.44s)
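The upgrade path this test walks through maps onto the commands below (same flags as in the log, harness paths dropped); a sketch, with the final downgrade attempt expected to be refused with K8S_DOWNGRADE_UNSUPPORTED:

	# Start on an old Kubernetes, stop, then restart the same profile on a newer version.
	minikube start -p kubernetes-upgrade-144895 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd
	minikube stop -p kubernetes-upgrade-144895
	minikube start -p kubernetes-upgrade-144895 --memory=2200 --kubernetes-version=v1.32.0 --driver=kvm2 --container-runtime=containerd
	kubectl --context kubernetes-upgrade-144895 version --output=json

	# Asking the existing profile for an older version is refused.
	minikube start -p kubernetes-upgrade-144895 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd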

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-020336 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-020336 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (90.672388ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-020336] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (92.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-020336 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-020336 --driver=kvm2  --container-runtime=containerd: (1m32.069293854s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-020336 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (92.35s)

                                                
                                    
TestPause/serial/Start (137.04s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-516772 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-516772 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m17.04065348s)
--- PASS: TestPause/serial/Start (137.04s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (50.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-020336 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0120 12:17:05.510263  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-020336 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (49.062757492s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-020336 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-020336 status -o json: exit status 2 (305.481374ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-020336","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-020336
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (50.25s)

                                                
                                    
TestNetworkPlugins/group/false (3.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-912009 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-912009 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (110.82071ms)

                                                
                                                
-- stdout --
	* [false-912009] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:17:17.969125  572544 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:17:17.969271  572544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:17:17.969286  572544 out.go:358] Setting ErrFile to fd 2...
	I0120 12:17:17.969293  572544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:17:17.969522  572544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
	I0120 12:17:17.970217  572544 out.go:352] Setting JSON to false
	I0120 12:17:17.971460  572544 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7180,"bootTime":1737368258,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:17:17.971585  572544 start.go:139] virtualization: kvm guest
	I0120 12:17:17.974133  572544 out.go:177] * [false-912009] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:17:17.975576  572544 notify.go:220] Checking for updates...
	I0120 12:17:17.975621  572544 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:17:17.976881  572544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:17:17.978177  572544 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
	I0120 12:17:17.979690  572544 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
	I0120 12:17:17.980989  572544 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:17:17.982081  572544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:17:17.983790  572544 config.go:182] Loaded profile config "NoKubernetes-020336": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0120 12:17:17.983899  572544 config.go:182] Loaded profile config "pause-516772": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:17:17.984007  572544 config.go:182] Loaded profile config "running-upgrade-020475": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0120 12:17:17.984107  572544 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:17:18.023857  572544 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 12:17:18.025174  572544 start.go:297] selected driver: kvm2
	I0120 12:17:18.025186  572544 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:17:18.025197  572544 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:17:18.027129  572544 out.go:201] 
	W0120 12:17:18.028622  572544 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0120 12:17:18.029954  572544 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-912009 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-912009

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-912009

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-912009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-912009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-912009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-912009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-912009

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-912009

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-912009

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-912009

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-912009

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-912009" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-912009" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 12:16:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.15:8443
  name: NoKubernetes-020336
contexts:
- context:
    cluster: NoKubernetes-020336
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 12:16:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-020336
  name: NoKubernetes-020336
current-context: NoKubernetes-020336
kind: Config
preferences: {}
users:
- name: NoKubernetes-020336
  user:
    client-certificate: /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/NoKubernetes-020336/client.crt
    client-key: /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/NoKubernetes-020336/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-912009

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912009"

                                                
                                                
----------------------- debugLogs end: false-912009 [took: 3.030706975s] --------------------------------
helpers_test.go:175: Cleaning up "false-912009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-912009
--- PASS: TestNetworkPlugins/group/false (3.30s)

                                                
                                    
TestNoKubernetes/serial/Start (34.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-020336 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-020336 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (34.82522596s)
--- PASS: TestNoKubernetes/serial/Start (34.83s)
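
A minimal sketch (binary path and profile name taken from the log above) of the --no-kubernetes start exercised by this test: the profile boots the VM and the container runtime but skips bootstrapping a Kubernetes cluster. Illustrative only, not the test helper itself.

// nokubernetes_start.go - rerun the start command recorded in the log.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "NoKubernetes-020336", "--no-kubernetes",
		"--driver=kvm2", "--container-runtime=containerd")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}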

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-020336 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-020336 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.335505ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
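
The verification step relies on systemctl's exit code: `systemctl is-active --quiet` exits non-zero when the unit is inactive, so the test expects the ssh command to fail. A small sketch of that check, assuming the binary path and profile name shown in the log:

// verify_kubelet_inactive.go - treat a non-zero exit from systemctl as
// confirmation that kubelet is not running in the --no-kubernetes profile.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-020336",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		// systemctl exits non-zero when the unit is inactive, which is the
		// expected outcome here.
		fmt.Println("kubelet is not active (expected):", err)
		return
	}
	fmt.Println("unexpected: kubelet is active")
}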

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (32.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (18.769915463s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.681101427s)
--- PASS: TestNoKubernetes/serial/ProfileList (32.45s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (41.94s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-516772 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-516772 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (41.917894962s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.94s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-020336
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-020336: (1.613969202s)
--- PASS: TestNoKubernetes/serial/Stop (1.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (51.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-020336 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-020336 --driver=kvm2  --container-runtime=containerd: (51.612074177s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (51.61s)

                                                
                                    
x
+
TestPause/serial/Pause (0.81s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-516772 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-516772 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-516772 --output=json --layout=cluster: exit status 2 (257.118762ms)

                                                
                                                
-- stdout --
	{"Name":"pause-516772","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-516772","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
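
The --layout=cluster payload above distinguishes a paused apiserver (418/Paused) from a stopped kubelet (405/Stopped) while the status command itself exits 2. A short decoding sketch; the struct mirrors only the fields visible in this sample output, not the full schema:

// decode_cluster_status.go - parse the status JSON captured in the log.
package main

import (
	"encoding/json"
	"fmt"
)

type componentStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string                     `json:"Name"`
		StatusName string                     `json:"StatusName"`
		Components map[string]componentStatus `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	sample := []byte(`{"Name":"pause-516772","StatusCode":418,"StatusName":"Paused",
		"Nodes":[{"Name":"pause-516772","StatusName":"OK",
		"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
		"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)

	var st clusterStatus
	if err := json.Unmarshal(sample, &st); err != nil {
		panic(err)
	}
	// A paused profile reports 418/"Paused" at the cluster level while the
	// node's kubelet shows "Stopped", matching the exit status 2 above.
	fmt.Println(st.StatusName, st.Nodes[0].Components["kubelet"].StatusName)
}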

                                                
                                    
x
+
TestPause/serial/Unpause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-516772 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.89s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-516772 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.84s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-516772 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.84s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-020336 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-020336 "sudo systemctl is-active --quiet service kubelet": exit status 1 (219.516497ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (186.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3950473159 start -p stopped-upgrade-054872 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3950473159 start -p stopped-upgrade-054872 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m40.224698941s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3950473159 -p stopped-upgrade-054872 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3950473159 -p stopped-upgrade-054872 stop: (2.360362351s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-054872 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0120 12:21:48.579195  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:22:05.510260  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-054872 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m23.930063532s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (186.52s)
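
The upgrade test drives three steps: start the cluster with the old release, stop it with that same binary, then start it again with the freshly built one. A compact sketch of the sequence, using the binary paths and profile name recorded above (the /tmp path is the temporary v1.26.0 copy this run downloaded):

// stopped_binary_upgrade.go - old start, old stop, new start.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	old := "/tmp/minikube-v1.26.0.3950473159"
	cur := "out/minikube-linux-amd64"
	profile := "stopped-upgrade-054872"

	run(old, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=containerd")
	run(old, "-p", profile, "stop")
	run(cur, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1",
		"--driver=kvm2", "--container-runtime=containerd")
}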

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (224.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-808623 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-808623 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m44.377068179s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (224.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (158.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-677886 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-677886 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (2m38.783920192s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (158.78s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-054872
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-054872: (1.093469927s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-354924 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-354924 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m24.650263119s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.65s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-677886 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [914455e0-75b7-4a66-a565-d566062b6620] Pending
helpers_test.go:344: "busybox" [914455e0-75b7-4a66-a565-d566062b6620] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [914455e0-75b7-4a66-a565-d566062b6620] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004889632s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-677886 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)
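
DeployApp applies testdata/busybox.yaml, waits (up to 8m0s here) for the pod labelled integration-test=busybox to become healthy, and then reads the container's open-file limit. A rough sketch of that flow; polling the pod phase is a simplified stand-in for the helper's label-based readiness wait:

// deploy_busybox.go - create, wait for Running, then read ulimit -n.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "no-preload-677886"

	if err := exec.Command("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml").Run(); err != nil {
		panic(err)
	}

	deadline := time.Now().Add(8 * time.Minute) // mirrors the 8m0s wait in the log
	for time.Now().Before(deadline) {
		// Error is ignored on purpose: jsonpath fails while no pod exists yet.
		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-l", "integration-test=busybox",
			"-o", "jsonpath={.items[0].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			break
		}
		time.Sleep(2 * time.Second)
	}

	out, err := exec.Command("kubectl", "--context", ctx, "exec", "busybox",
		"--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	fmt.Println(strings.TrimSpace(string(out)), err)
}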

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-677886 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-677886 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (91.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-677886 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-677886 --alsologtostderr -v=3: (1m31.039572705s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-808623 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [493118b7-82ff-42c6-be2a-3756ae868e4d] Pending
helpers_test.go:344: "busybox" [493118b7-82ff-42c6-be2a-3756ae868e4d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [493118b7-82ff-42c6-be2a-3756ae868e4d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004477669s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-808623 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-808623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-808623 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (91.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-808623 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-808623 --alsologtostderr -v=3: (1m31.405757512s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.41s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-354924 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c245817b-67bc-47fe-97bd-820ee4e4861f] Pending
helpers_test.go:344: "busybox" [c245817b-67bc-47fe-97bd-820ee4e4861f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c245817b-67bc-47fe-97bd-820ee4e4861f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005101385s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-354924 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (49.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-408900 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 12:24:29.425776  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-408900 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (49.369733535s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-354924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-354924 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-354924 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-354924 --alsologtostderr -v=3: (1m31.472098138s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.47s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-677886 -n no-preload-677886
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-677886 -n no-preload-677886: exit status 7 (77.928285ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-677886 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
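
EnableAddonAfterStop first checks the host state: on a stopped profile, `minikube status --format={{.Host}}` prints Stopped and exits 7, which the test tolerates before enabling the dashboard addon against the stopped cluster. A small sketch of that exit-code handling, with the commands copied from the log:

// enable_addon_after_stop.go - tolerate the non-zero status exit, then enable the addon.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "no-preload-677886"
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	fmt.Printf("host state: %s\n", out) // "Stopped" in the run above

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit status 7 is what the stopped host produced above; the test
		// logs it as "may be ok" and continues.
		fmt.Println("status exit code:", ee.ExitCode())
	}

	addon := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard",
		"-p", profile, "--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	fmt.Println("enable dashboard:", addon.Run())
}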

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-408900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-408900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.107293043s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-408900 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-408900 --alsologtostderr -v=3: (7.334656286s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-408900 -n newest-cni-408900
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-408900 -n newest-cni-408900: exit status 7 (77.505334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-408900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (33.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-408900 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-408900 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (33.577996634s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-408900 -n newest-cni-408900
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-808623 -n old-k8s-version-808623
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-808623 -n old-k8s-version-808623: exit status 7 (73.090054ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-808623 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (149.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-808623 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-808623 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m28.816414006s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-808623 -n old-k8s-version-808623
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (149.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-408900 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-408900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-408900 -n newest-cni-408900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-408900 -n newest-cni-408900: exit status 2 (249.472394ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-408900 -n newest-cni-408900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-408900 -n newest-cni-408900: exit status 2 (241.070747ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-408900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-408900 -n newest-cni-408900
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-408900 -n newest-cni-408900
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.43s)
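
The pause check alternates pause/unpause with status probes: while paused, the APIServer field reads Paused and the Kubelet field reads Stopped, and both status calls exit 2, which the test records as "may be ok". A condensed sketch of the sequence (profile name from the log):

// pause_unpause_check.go - pause, probe status fields, unpause, probe again.
package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
	return string(out), err
}

func main() {
	p := "newest-cni-408900"

	fmt.Println(mk("pause", "-p", p, "--alsologtostderr", "-v=1"))
	// Both probes exit 2 while paused; only the printed field is inspected.
	fmt.Println(mk("status", "--format={{.APIServer}}", "-p", p, "-n", p)) // Paused
	fmt.Println(mk("status", "--format={{.Kubelet}}", "-p", p, "-n", p))   // Stopped
	fmt.Println(mk("unpause", "-p", p, "--alsologtostderr", "-v=1"))
	fmt.Println(mk("status", "--format={{.APIServer}}", "-p", p, "-n", p))
	fmt.Println(mk("status", "--format={{.Kubelet}}", "-p", p, "-n", p))
}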

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (59.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-565837 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-565837 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (59.78248193s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.78s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-354924 -n default-k8s-diff-port-354924
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-354924 -n default-k8s-diff-port-354924: exit status 7 (109.76842ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-354924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (315.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-354924 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-354924 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (5m15.059558741s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-354924 -n default-k8s-diff-port-354924
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (315.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-565837 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e19d2283-4d8f-43a8-aef7-4aae5829f416] Pending
E0120 12:27:05.509617  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [e19d2283-4d8f-43a8-aef7-4aae5829f416] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e19d2283-4d8f-43a8-aef7-4aae5829f416] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004555141s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-565837 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-565837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-565837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.016987182s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-565837 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (91.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-565837 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-565837 --alsologtostderr -v=3: (1m31.063990067s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7hq62" [b09b36c7-efc9-4a56-99b0-6acb4dc2d65a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005030154s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7hq62" [b09b36c7-efc9-4a56-99b0-6acb4dc2d65a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004098504s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-808623 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-808623 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-808623 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-808623 -n old-k8s-version-808623
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-808623 -n old-k8s-version-808623: exit status 2 (251.774556ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-808623 -n old-k8s-version-808623
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-808623 -n old-k8s-version-808623: exit status 2 (251.794231ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-808623 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-808623 -n old-k8s-version-808623
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-808623 -n old-k8s-version-808623
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (61.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m1.848993672s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.85s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-565837 -n embed-certs-565837
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-565837 -n embed-certs-565837: exit status 7 (73.360368ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-565837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dbmnf" [6eb47f09-d56c-4bf6-9139-dee76f37cc02] Running
E0120 12:29:29.425668  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004503363s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-912009 "pgrep -a kubelet"
I0120 12:29:35.295238  537581 config.go:182] Loaded profile config "kindnet-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-912009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zzv5z" [ee5058e0-2496-42bc-8835-7a1be38adde1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zzv5z" [ee5058e0-2496-42bc-8835-7a1be38adde1] Running
E0120 12:29:41.955469  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004926433s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)
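
NetCatPod force-replaces the netcat deployment and waits for the app=netcat pod to come up. The test helper polls the pod list itself; the sketch below uses `kubectl wait` only as an equivalent shorthand for that readiness wait:

// netcat_pod.go - replace the manifest and wait for the pod to become Ready.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "kindnet-912009"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}

func main() {
	run("replace", "--force", "-f", "testdata/netcat-deployment.yaml")
	run("wait", "--for=condition=ready", "pod", "-l", "app=netcat", "--timeout=15m")
}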

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-912009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)
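
The DNS, Localhost, and HairPin checks above exercise distinct paths from inside the netcat pod: in-cluster DNS resolution, loopback on port 8080, and hairpin traffic (the pod reaching itself through its own service name). A minimal sketch that reruns the three probes verbatim:

// kindnet_probes.go - DNS, loopback, and hairpin connectivity checks.
package main

import (
	"fmt"
	"os/exec"
)

func probe(ctx string, args ...string) error {
	full := append([]string{"--context", ctx, "exec", "deployment/netcat", "--"}, args...)
	return exec.Command("kubectl", full...).Run()
}

func main() {
	ctx := "kindnet-912009"
	fmt.Println("dns:      ", probe(ctx, "nslookup", "kubernetes.default"))
	fmt.Println("localhost:", probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"))
	fmt.Println("hairpin:  ", probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"))
}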

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (90.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0120 12:30:22.917680  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:30:52.499890  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m30.788648447s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.79s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6kv8k" [2582013f-4fb8-403d-9953-8cad401abf27] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003815101s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6kv8k" [2582013f-4fb8-403d-9953-8cad401abf27] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004810804s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-354924 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-912009 "pgrep -a kubelet"
I0120 12:31:31.929773  537581 config.go:182] Loaded profile config "auto-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-912009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wsxm8" [98e87a2c-8eda-49a1-8456-4ff083d8e6de] Pending
helpers_test.go:344: "netcat-5d86dc444-wsxm8" [98e87a2c-8eda-49a1-8456-4ff083d8e6de] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wsxm8" [98e87a2c-8eda-49a1-8456-4ff083d8e6de] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005516541s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-354924 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
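The image verification step simply lists the images loaded into the profile and reports anything outside the expected minikube/Kubernetes set, as the two "Found non-minikube image" lines show. A quick manual equivalent:

	out/minikube-linux-amd64 -p default-k8s-diff-port-354924 image list --format=json
	# or, for a plain one-image-per-line listing:
	out/minikube-linux-amd64 -p default-k8s-diff-port-354924 image list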

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-354924 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-354924 -n default-k8s-diff-port-354924
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-354924 -n default-k8s-diff-port-354924: exit status 2 (260.63977ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-354924 -n default-k8s-diff-port-354924
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-354924 -n default-k8s-diff-port-354924: exit status 2 (261.599993ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-354924 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-354924 -n default-k8s-diff-port-354924
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-354924 -n default-k8s-diff-port-354924
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)
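The Pause test drives the sequence below; minikube status exits with status 2 while the apiserver is paused or the kubelet is stopped, which is why the harness records those non-zero exits above as "(may be ok)". Re-run by hand against the same profile:

	out/minikube-linux-amd64 pause -p default-k8s-diff-port-354924
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-354924   # prints Paused, exit status 2
	out/minikube-linux-amd64 unpause -p default-k8s-diff-port-354924
	out/minikube-linux-amd64 status -p default-k8s-diff-port-354924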

                                                
                                    
TestNetworkPlugins/group/flannel/Start (74.48s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m14.477989342s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.48s)
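Each Start test in this group boots a fresh profile with the relevant --cni flag; the complete invocation is logged above. Stripped to the flags that matter (the profile name here is arbitrary):

	minikube start -p flannel-demo --memory=3072 --cni=flannel --driver=kvm2 --container-runtime=containerd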

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-912009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
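The DNS, Localhost, and HairPin checks that every plugin runs are three execs into the netcat deployment: resolve the cluster DNS name, connect to localhost, then connect back to the pod's own service name (the hairpin case). Taken directly from the commands above:

	kubectl --context auto-912009 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"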

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (96.08s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0120 12:32:05.509749  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m36.078810586s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (96.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bwwmt" [c80e64d8-87f3-45d6-8e5f-4a1b05cb576f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004891655s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
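ControllerPod only verifies that the CNI's own daemon pod is healthy. A hand-run equivalent, with kubectl wait in place of the test's polling helper (label and namespace as reported above):

	kubectl --context flannel-912009 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m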

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-912009 "pgrep -a kubelet"
I0120 12:33:01.402402  537581 config.go:182] Loaded profile config "flannel-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-912009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gcrbb" [c3b808bd-4b85-4a36-b801-f49076a1cf17] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gcrbb" [c3b808bd-4b85-4a36-b801-f49076a1cf17] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004645749s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-912009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (62.44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m2.443526187s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-912009 "pgrep -a kubelet"
I0120 12:33:35.558489  537581 config.go:182] Loaded profile config "enable-default-cni-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-912009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-l4m2d" [a9eeda30-9d40-4ae7-bb69-329fc7682982] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-l4m2d" [a9eeda30-9d40-4ae7-bb69-329fc7682982] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004664937s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-912009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (82.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E0120 12:34:27.296001  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:27.302483  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:27.313896  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:27.335313  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:27.376820  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:27.458302  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:27.619876  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:27.941271  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:28.583050  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:28.681068  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:29.079402  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:29.086203  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:29.097733  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:29.119237  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m22.008242725s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-912009 "pgrep -a kubelet"
E0120 12:34:29.160683  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:29.242929  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
I0120 12:34:29.377385  537581 config.go:182] Loaded profile config "bridge-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-912009 replace --force -f testdata/netcat-deployment.yaml
E0120 12:34:29.405294  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:29.425684  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-mnxks" [8c82bf64-d120-4fbe-8ed5-d7d8cb3c577f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0120 12:34:29.727251  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:29.865437  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:30.369247  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:31.651285  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:32.427054  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-mnxks" [8c82bf64-d120-4fbe-8ed5-d7d8cb3c577f] Running
E0120 12:34:34.212995  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:37.549033  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005220432s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-912009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (78.49s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0120 12:35:08.272105  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:35:10.058836  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-912009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m18.488495352s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (78.49s)
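The custom-flannel variant shows that --cni also accepts a path to a local manifest rather than a built-in keyword, which is how a bespoke CNI gets exercised; the run above points it at testdata/kube-flannel.yaml. Trimmed invocation (profile name arbitrary):

	minikube start -p custom-flannel-demo --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=containerd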

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-58f5q" [4c659cf9-7e8b-4f9e-a251-005a41562c7c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.02044182s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-912009 "pgrep -a kubelet"
I0120 12:35:32.473352  537581 config.go:182] Loaded profile config "calico-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.12s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-912009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-912009 replace --force -f testdata/netcat-deployment.yaml: (1.109864493s)
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7lwjl" [fee1691d-edce-467b-b30d-df93a23cf461] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7lwjl" [fee1691d-edce-467b-b30d-df93a23cf461] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005289299s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.12s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-912009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-912009 "pgrep -a kubelet"
I0120 12:36:13.970796  537581 config.go:182] Loaded profile config "custom-flannel-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-912009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6599k" [8a05d0e7-0a0a-4529-a6aa-d6686bc5e3d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-6599k" [8a05d0e7-0a0a-4529-a6aa-d6686bc5e3d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004427433s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-912009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-912009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
E0120 12:36:42.422705  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/auto-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:52.664793  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/auto-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:05.509921  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:11.155163  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:12.943066  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:13.146627  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/auto-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:54.108821  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/auto-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:55.169985  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:55.176393  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:55.187702  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:55.209209  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:55.250639  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:55.332194  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:55.493831  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:55.815542  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:56.456996  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:57.738610  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:00.300560  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:05.422478  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:15.664431  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:28.581781  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:35.863705  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:35.870124  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:35.881566  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:35.903040  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:35.944510  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:36.026011  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:36.146680  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:36.188106  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:36.509821  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:37.151857  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:38.433685  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:40.994992  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:46.117200  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:38:56.359515  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:00.980236  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:16.030900  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/auto-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:16.840921  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:17.108537  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:27.295746  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:29.079860  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:29.425608  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:29.628283  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:29.634677  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:29.646126  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:29.667605  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:29.709143  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:29.790670  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:29.952193  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:30.273750  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:30.915849  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:32.197415  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:34.759638  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:39.881266  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:50.123325  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:54.996557  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:56.785038  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:57.802514  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:10.604685  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:26.069041  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:26.075436  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:26.086811  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:26.108168  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:26.149603  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:26.230998  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:26.392610  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:26.714287  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:27.356455  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:28.637955  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:31.199357  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:36.320741  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:39.031017  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:46.562029  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:40:51.566107  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:07.043669  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:14.185543  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:14.191954  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:14.203359  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:14.224720  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:14.266124  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:14.347597  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:14.509131  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:14.830719  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:15.472925  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:16.755011  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:19.317159  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:19.724290  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:24.439401  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:32.167716  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/auto-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:34.680922  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:48.005170  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:55.162338  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:41:59.872846  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/auto-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:42:05.510092  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:42:13.488758  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:42:36.124401  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:42:55.169715  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:43:09.926596  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:43:22.872457  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:43:35.863967  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:43:58.045974  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:44:00.979962  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:44:03.565976  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:44:27.295227  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:44:29.079342  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:44:29.426094  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:44:29.627688  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:44:57.330800  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:45:24.043201  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:45:26.069020  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:45:53.768842  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:46:14.184756  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:46:32.167824  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/auto-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:46:41.887913  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:47:05.509543  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/functional-935944/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:47:32.501879  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:47:55.169996  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/flannel-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:48:35.864148  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/enable-default-cni-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:49:00.980339  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/old-k8s-version-808623/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:49:27.295728  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/default-k8s-diff-port-354924/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:49:29.079414  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/kindnet-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:49:29.426154  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/addons-861226/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:49:29.627834  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/bridge-912009/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:50:26.069156  537581 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/calico-912009/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    

Test skip (38/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.0/cached-images 0
15 TestDownloadOnly/v1.32.0/binaries 0
16 TestDownloadOnly/v1.32.0/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
260 TestStartStop/group/disable-driver-mounts 0.15
267 TestNetworkPlugins/group/kubenet 3.37
275 TestNetworkPlugins/group/cilium 3.63
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-359189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-359189
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-912009 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-912009

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-912009

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-912009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-912009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-912009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-912009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-912009

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-912009

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-912009

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-912009

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-912009

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-912009" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-912009" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 12:16:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.15:8443
  name: NoKubernetes-020336
contexts:
- context:
    cluster: NoKubernetes-020336
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 12:16:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-020336
  name: NoKubernetes-020336
current-context: NoKubernetes-020336
kind: Config
preferences: {}
users:
- name: NoKubernetes-020336
  user:
    client-certificate: /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/NoKubernetes-020336/client.crt
    client-key: /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/NoKubernetes-020336/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-912009

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912009"

                                                
                                                
----------------------- debugLogs end: kubenet-912009 [took: 3.227491331s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-912009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-912009
--- SKIP: TestNetworkPlugins/group/kubenet (3.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-912009 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-912009" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-912009

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-912009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-912009

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-912009

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-912009" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-912009" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-912009" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-912009" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-912009" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: kubelet daemon config:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> k8s: kubelet logs:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 12:16:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.15:8443
  name: NoKubernetes-020336
contexts:
- context:
    cluster: NoKubernetes-020336
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 12:16:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-020336
  name: NoKubernetes-020336
current-context: NoKubernetes-020336
kind: Config
preferences: {}
users:
- name: NoKubernetes-020336
  user:
    client-certificate: /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/NoKubernetes-020336/client.crt
    client-key: /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/NoKubernetes-020336/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-912009

>>> host: docker daemon status:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: docker daemon config:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: docker system info:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: cri-docker daemon status:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: cri-docker daemon config:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: cri-dockerd version:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: containerd daemon status:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: containerd daemon config:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: containerd config dump:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: crio daemon status:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: crio daemon config:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: /etc/crio:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

>>> host: crio config:
* Profile "cilium-912009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912009"

----------------------- debugLogs end: cilium-912009 [took: 3.474152611s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-912009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-912009
--- SKIP: TestNetworkPlugins/group/cilium (3.63s)
